Test Report: Docker_Linux_crio_arm64 17243

a4c3e20099a4bdf499fee0d2faaf79bc020e16c9:2023-09-14:31017

Test failures (8/298)

TestAddons/parallel/Ingress (168.21s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-909789 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-909789 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-909789 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8c805331-4cf1-467e-861c-ff798bf58de0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8c805331-4cf1-467e-861c-ff798bf58de0] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.017733041s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p addons-909789 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-909789 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.724961132s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
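
The failing step above is the in-node HTTP probe against the nginx Ingress. A minimal manual-triage sketch along the same lines (the profile name addons-909789 and kubectl context are taken from this log; the explicit --max-time 30 on curl is an illustrative addition, not part of the test):

	# replay the request the test makes from inside the node, with a bounded timeout
	out/minikube-linux-arm64 -p addons-909789 ssh "curl -s --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"
	# confirm the ingress controller pod and the Ingress object created from testdata/nginx-ingress-v1.yaml are present
	kubectl --context addons-909789 -n ingress-nginx get pods -o wide
	kubectl --context addons-909789 get ingress
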
addons_test.go:262: (dbg) Run:  kubectl --context addons-909789 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p addons-909789 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.052206532s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
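
The nslookup failure above targets the ingress-dns responder on the node IP reported by minikube ip (192.168.49.2 in this run). A short triage sketch along the same lines; dig and the ss port check are illustrative additions, not part of the test:

	# repeat the test's DNS query, then the same query with a short explicit timeout
	nslookup hello-john.test 192.168.49.2
	dig +time=5 +tries=1 @192.168.49.2 hello-john.test
	# verify something is listening on 53/udp inside the minikube node
	out/minikube-linux-arm64 -p addons-909789 ssh "sudo ss -ulpn | grep :53"
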
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p addons-909789 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p addons-909789 addons disable ingress-dns --alsologtostderr -v=1: (1.025678264s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p addons-909789 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p addons-909789 addons disable ingress --alsologtostderr -v=1: (7.766747723s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-909789
helpers_test.go:235: (dbg) docker inspect addons-909789:

-- stdout --
	[
	    {
	        "Id": "775b74b83176c7f865a718b1ec95a0339437bfbe44c4b733b0959221611510a1",
	        "Created": "2023-09-14T22:27:26.470370515Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2847058,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-14T22:27:26.77873066Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:dc3fcbe613a9f8e1e2fcaa6abcc8f1cc38d54475810991578dbd56e1d327de1f",
	        "ResolvConfPath": "/var/lib/docker/containers/775b74b83176c7f865a718b1ec95a0339437bfbe44c4b733b0959221611510a1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/775b74b83176c7f865a718b1ec95a0339437bfbe44c4b733b0959221611510a1/hostname",
	        "HostsPath": "/var/lib/docker/containers/775b74b83176c7f865a718b1ec95a0339437bfbe44c4b733b0959221611510a1/hosts",
	        "LogPath": "/var/lib/docker/containers/775b74b83176c7f865a718b1ec95a0339437bfbe44c4b733b0959221611510a1/775b74b83176c7f865a718b1ec95a0339437bfbe44c4b733b0959221611510a1-json.log",
	        "Name": "/addons-909789",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-909789:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-909789",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fb222b215f42eda5f086fe47041cc4744034ff20f813539731093cd26566a1ac-init/diff:/var/lib/docker/overlay2/01d6f4b44b4d3652921d9dfec86a5600f173a3b2af60ce73c84e7669723804ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fb222b215f42eda5f086fe47041cc4744034ff20f813539731093cd26566a1ac/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fb222b215f42eda5f086fe47041cc4744034ff20f813539731093cd26566a1ac/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fb222b215f42eda5f086fe47041cc4744034ff20f813539731093cd26566a1ac/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-909789",
	                "Source": "/var/lib/docker/volumes/addons-909789/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-909789",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-909789",
	                "name.minikube.sigs.k8s.io": "addons-909789",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7ff7eaea54fb53a046f06a32fa7f889d62b85fc07d252605e0901129f7d0fcd6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36388"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36387"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36384"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36386"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36385"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7ff7eaea54fb",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-909789": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "775b74b83176",
	                        "addons-909789"
	                    ],
	                    "NetworkID": "3b76ca2e343863282581dae16eea2be7517751d91302c0e0fd185f37c286a336",
	                    "EndpointID": "df71ee1c42e33db05671efcd3eb7617eb37a639a2f9621cfa17decc86bfb7021",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
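
The node IP that the failed nslookup targets (192.168.49.2) appears under NetworkSettings.Networks in the inspect output above; it can also be extracted directly with a Go-template filter, a trimmed variant of the one minikube itself runs later in these logs:

	docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" addons-909789
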
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-909789 -n addons-909789
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-909789 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-909789 logs -n 25: (1.629806739s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-170237   | jenkins | v1.31.2 | 14 Sep 23 22:26 UTC |                     |
	|         | -p download-only-170237        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-170237   | jenkins | v1.31.2 | 14 Sep 23 22:26 UTC |                     |
	|         | -p download-only-170237        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.31.2 | 14 Sep 23 22:27 UTC | 14 Sep 23 22:27 UTC |
	| delete  | -p download-only-170237        | download-only-170237   | jenkins | v1.31.2 | 14 Sep 23 22:27 UTC | 14 Sep 23 22:27 UTC |
	| delete  | -p download-only-170237        | download-only-170237   | jenkins | v1.31.2 | 14 Sep 23 22:27 UTC | 14 Sep 23 22:27 UTC |
	| start   | --download-only -p             | download-docker-646598 | jenkins | v1.31.2 | 14 Sep 23 22:27 UTC |                     |
	|         | download-docker-646598         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p download-docker-646598      | download-docker-646598 | jenkins | v1.31.2 | 14 Sep 23 22:27 UTC | 14 Sep 23 22:27 UTC |
	| start   | --download-only -p             | binary-mirror-374730   | jenkins | v1.31.2 | 14 Sep 23 22:27 UTC |                     |
	|         | binary-mirror-374730           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:35183         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-374730        | binary-mirror-374730   | jenkins | v1.31.2 | 14 Sep 23 22:27 UTC | 14 Sep 23 22:27 UTC |
	| start   | -p addons-909789               | addons-909789          | jenkins | v1.31.2 | 14 Sep 23 22:27 UTC | 14 Sep 23 22:29 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-909789          | jenkins | v1.31.2 | 14 Sep 23 22:29 UTC | 14 Sep 23 22:29 UTC |
	|         | addons-909789                  |                        |         |         |                     |                     |
	| addons  | enable headlamp                | addons-909789          | jenkins | v1.31.2 | 14 Sep 23 22:29 UTC | 14 Sep 23 22:29 UTC |
	|         | -p addons-909789               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-909789 ip               | addons-909789          | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC | 14 Sep 23 22:30 UTC |
	| addons  | addons-909789 addons disable   | addons-909789          | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC | 14 Sep 23 22:30 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| ssh     | addons-909789 ssh curl -s      | addons-909789          | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| addons  | addons-909789 addons           | addons-909789          | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC | 14 Sep 23 22:30 UTC |
	|         | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-909789 addons           | addons-909789          | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC | 14 Sep 23 22:30 UTC |
	|         | disable volumesnapshots        |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-909789 addons           | addons-909789          | jenkins | v1.31.2 | 14 Sep 23 22:30 UTC | 14 Sep 23 22:31 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-909789          | jenkins | v1.31.2 | 14 Sep 23 22:31 UTC | 14 Sep 23 22:31 UTC |
	|         | addons-909789                  |                        |         |         |                     |                     |
	| ip      | addons-909789 ip               | addons-909789          | jenkins | v1.31.2 | 14 Sep 23 22:32 UTC | 14 Sep 23 22:32 UTC |
	| addons  | addons-909789 addons disable   | addons-909789          | jenkins | v1.31.2 | 14 Sep 23 22:32 UTC | 14 Sep 23 22:32 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-909789 addons disable   | addons-909789          | jenkins | v1.31.2 | 14 Sep 23 22:32 UTC | 14 Sep 23 22:32 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 22:27:03
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 22:27:03.284582 2846606 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:27:03.284718 2846606 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:27:03.284752 2846606 out.go:309] Setting ErrFile to fd 2...
	I0914 22:27:03.284764 2846606 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:27:03.285025 2846606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-2840729/.minikube/bin
	I0914 22:27:03.285485 2846606 out.go:303] Setting JSON to false
	I0914 22:27:03.286580 2846606 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":79768,"bootTime":1694650655,"procs":378,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0914 22:27:03.286650 2846606 start.go:138] virtualization:  
	I0914 22:27:03.289413 2846606 out.go:177] * [addons-909789] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0914 22:27:03.291982 2846606 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 22:27:03.294038 2846606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 22:27:03.292124 2846606 notify.go:220] Checking for updates...
	I0914 22:27:03.297818 2846606 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 22:27:03.299936 2846606 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube
	I0914 22:27:03.302063 2846606 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 22:27:03.304380 2846606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 22:27:03.306600 2846606 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 22:27:03.331643 2846606 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0914 22:27:03.331742 2846606 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 22:27:03.406847 2846606 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-09-14 22:27:03.397397503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 22:27:03.406956 2846606 docker.go:294] overlay module found
	I0914 22:27:03.410778 2846606 out.go:177] * Using the docker driver based on user configuration
	I0914 22:27:03.412816 2846606 start.go:298] selected driver: docker
	I0914 22:27:03.412832 2846606 start.go:902] validating driver "docker" against <nil>
	I0914 22:27:03.412846 2846606 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 22:27:03.413464 2846606 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 22:27:03.478711 2846606 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-09-14 22:27:03.469486257 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 22:27:03.478868 2846606 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 22:27:03.479093 2846606 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 22:27:03.481112 2846606 out.go:177] * Using Docker driver with root privileges
	I0914 22:27:03.483095 2846606 cni.go:84] Creating CNI manager for ""
	I0914 22:27:03.483113 2846606 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 22:27:03.483125 2846606 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 22:27:03.483140 2846606 start_flags.go:321] config:
	{Name:addons-909789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-909789 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:27:03.485466 2846606 out.go:177] * Starting control plane node addons-909789 in cluster addons-909789
	I0914 22:27:03.487253 2846606 cache.go:122] Beginning downloading kic base image for docker with crio
	I0914 22:27:03.489129 2846606 out.go:177] * Pulling base image ...
	I0914 22:27:03.490824 2846606 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:27:03.490845 2846606 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 in local docker daemon
	I0914 22:27:03.490875 2846606 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4
	I0914 22:27:03.490884 2846606 cache.go:57] Caching tarball of preloaded images
	I0914 22:27:03.490951 2846606 preload.go:174] Found /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0914 22:27:03.490961 2846606 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0914 22:27:03.491319 2846606 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/config.json ...
	I0914 22:27:03.491341 2846606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/config.json: {Name:mk156cb73c72eb14bc20a6afc4493022b5e39b40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:27:03.507930 2846606 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 to local cache
	I0914 22:27:03.508027 2846606 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 in local cache directory
	I0914 22:27:03.508049 2846606 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 in local cache directory, skipping pull
	I0914 22:27:03.508057 2846606 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 exists in cache, skipping pull
	I0914 22:27:03.508065 2846606 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 as a tarball
	I0914 22:27:03.508076 2846606 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 from local cache
	I0914 22:27:19.090146 2846606 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 from cached tarball
	I0914 22:27:19.090179 2846606 cache.go:195] Successfully downloaded all kic artifacts
	I0914 22:27:19.090209 2846606 start.go:365] acquiring machines lock for addons-909789: {Name:mk901066bcd7a264b5bdb7de6da5e1202472a57f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:27:19.090695 2846606 start.go:369] acquired machines lock for "addons-909789" in 457.222µs
	I0914 22:27:19.090730 2846606 start.go:93] Provisioning new machine with config: &{Name:addons-909789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-909789 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:27:19.090822 2846606 start.go:125] createHost starting for "" (driver="docker")
	I0914 22:27:19.093383 2846606 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0914 22:27:19.093632 2846606 start.go:159] libmachine.API.Create for "addons-909789" (driver="docker")
	I0914 22:27:19.093655 2846606 client.go:168] LocalClient.Create starting
	I0914 22:27:19.093770 2846606 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem
	I0914 22:27:19.615288 2846606 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem
	I0914 22:27:20.025166 2846606 cli_runner.go:164] Run: docker network inspect addons-909789 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0914 22:27:20.042258 2846606 cli_runner.go:211] docker network inspect addons-909789 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0914 22:27:20.042335 2846606 network_create.go:281] running [docker network inspect addons-909789] to gather additional debugging logs...
	I0914 22:27:20.042358 2846606 cli_runner.go:164] Run: docker network inspect addons-909789
	W0914 22:27:20.059336 2846606 cli_runner.go:211] docker network inspect addons-909789 returned with exit code 1
	I0914 22:27:20.059370 2846606 network_create.go:284] error running [docker network inspect addons-909789]: docker network inspect addons-909789: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-909789 not found
	I0914 22:27:20.059384 2846606 network_create.go:286] output of [docker network inspect addons-909789]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-909789 not found
	
	** /stderr **
	I0914 22:27:20.059444 2846606 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 22:27:20.077229 2846606 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001166860}
	I0914 22:27:20.077269 2846606 network_create.go:123] attempt to create docker network addons-909789 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0914 22:27:20.077327 2846606 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-909789 addons-909789
	I0914 22:27:20.150694 2846606 network_create.go:107] docker network addons-909789 192.168.49.0/24 created
	I0914 22:27:20.150722 2846606 kic.go:117] calculated static IP "192.168.49.2" for the "addons-909789" container
	I0914 22:27:20.150797 2846606 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0914 22:27:20.170591 2846606 cli_runner.go:164] Run: docker volume create addons-909789 --label name.minikube.sigs.k8s.io=addons-909789 --label created_by.minikube.sigs.k8s.io=true
	I0914 22:27:20.191021 2846606 oci.go:103] Successfully created a docker volume addons-909789
	I0914 22:27:20.191115 2846606 cli_runner.go:164] Run: docker run --rm --name addons-909789-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-909789 --entrypoint /usr/bin/test -v addons-909789:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 -d /var/lib
	I0914 22:27:22.291277 2846606 cli_runner.go:217] Completed: docker run --rm --name addons-909789-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-909789 --entrypoint /usr/bin/test -v addons-909789:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 -d /var/lib: (2.100119014s)
	I0914 22:27:22.291313 2846606 oci.go:107] Successfully prepared a docker volume addons-909789
	I0914 22:27:22.291338 2846606 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:27:22.291358 2846606 kic.go:190] Starting extracting preloaded images to volume ...
	I0914 22:27:22.291436 2846606 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-909789:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 -I lz4 -xf /preloaded.tar -C /extractDir
	I0914 22:27:26.390829 2846606 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-909789:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 -I lz4 -xf /preloaded.tar -C /extractDir: (4.09934261s)
	I0914 22:27:26.390870 2846606 kic.go:199] duration metric: took 4.099501 seconds to extract preloaded images to volume
	W0914 22:27:26.390996 2846606 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0914 22:27:26.391143 2846606 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0914 22:27:26.454477 2846606 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-909789 --name addons-909789 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-909789 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-909789 --network addons-909789 --ip 192.168.49.2 --volume addons-909789:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503
	I0914 22:27:26.787320 2846606 cli_runner.go:164] Run: docker container inspect addons-909789 --format={{.State.Running}}
	I0914 22:27:26.811176 2846606 cli_runner.go:164] Run: docker container inspect addons-909789 --format={{.State.Status}}
	I0914 22:27:26.835707 2846606 cli_runner.go:164] Run: docker exec addons-909789 stat /var/lib/dpkg/alternatives/iptables
	I0914 22:27:26.917351 2846606 oci.go:144] the created container "addons-909789" has a running status.
	I0914 22:27:26.917379 2846606 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/addons-909789/id_rsa...
	I0914 22:27:27.333493 2846606 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/addons-909789/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0914 22:27:27.365020 2846606 cli_runner.go:164] Run: docker container inspect addons-909789 --format={{.State.Status}}
	I0914 22:27:27.388312 2846606 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0914 22:27:27.388333 2846606 kic_runner.go:114] Args: [docker exec --privileged addons-909789 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0914 22:27:27.496905 2846606 cli_runner.go:164] Run: docker container inspect addons-909789 --format={{.State.Status}}
	I0914 22:27:27.542562 2846606 machine.go:88] provisioning docker machine ...
	I0914 22:27:27.542591 2846606 ubuntu.go:169] provisioning hostname "addons-909789"
	I0914 22:27:27.542983 2846606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-909789
	I0914 22:27:27.590408 2846606 main.go:141] libmachine: Using SSH client type: native
	I0914 22:27:27.590857 2846606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36388 <nil> <nil>}
	I0914 22:27:27.590881 2846606 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-909789 && echo "addons-909789" | sudo tee /etc/hostname
	I0914 22:27:27.591427 2846606 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48416->127.0.0.1:36388: read: connection reset by peer
	I0914 22:27:30.756333 2846606 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-909789
	
	I0914 22:27:30.756418 2846606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-909789
	I0914 22:27:30.776270 2846606 main.go:141] libmachine: Using SSH client type: native
	I0914 22:27:30.776708 2846606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36388 <nil> <nil>}
	I0914 22:27:30.776733 2846606 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-909789' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-909789/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-909789' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:27:30.921585 2846606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:27:30.921655 2846606 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17243-2840729/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-2840729/.minikube}
	I0914 22:27:30.921691 2846606 ubuntu.go:177] setting up certificates
	I0914 22:27:30.921728 2846606 provision.go:83] configureAuth start
	I0914 22:27:30.921809 2846606 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-909789
	I0914 22:27:30.941974 2846606 provision.go:138] copyHostCerts
	I0914 22:27:30.942050 2846606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem (1078 bytes)
	I0914 22:27:30.942178 2846606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem (1123 bytes)
	I0914 22:27:30.942240 2846606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem (1675 bytes)
	I0914 22:27:30.942286 2846606 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem org=jenkins.addons-909789 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-909789]
	I0914 22:27:31.734310 2846606 provision.go:172] copyRemoteCerts
	I0914 22:27:31.734375 2846606 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:27:31.734420 2846606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-909789
	I0914 22:27:31.755164 2846606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36388 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/addons-909789/id_rsa Username:docker}
	I0914 22:27:31.858816 2846606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:27:31.886489 2846606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0914 22:27:31.914062 2846606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 22:27:31.941738 2846606 provision.go:86] duration metric: configureAuth took 1.019977802s
	I0914 22:27:31.941763 2846606 ubuntu.go:193] setting minikube options for container-runtime
	I0914 22:27:31.941949 2846606 config.go:182] Loaded profile config "addons-909789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:27:31.942058 2846606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-909789
	I0914 22:27:31.960238 2846606 main.go:141] libmachine: Using SSH client type: native
	I0914 22:27:31.960687 2846606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36388 <nil> <nil>}
	I0914 22:27:31.960709 2846606 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:27:32.214978 2846606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:27:32.214999 2846606 machine.go:91] provisioned docker machine in 4.672418817s
	I0914 22:27:32.215009 2846606 client.go:171] LocalClient.Create took 13.121348993s
	I0914 22:27:32.215030 2846606 start.go:167] duration metric: libmachine.API.Create for "addons-909789" took 13.121398847s
	I0914 22:27:32.215038 2846606 start.go:300] post-start starting for "addons-909789" (driver="docker")
	I0914 22:27:32.215050 2846606 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:27:32.215110 2846606 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:27:32.215148 2846606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-909789
	I0914 22:27:32.232618 2846606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36388 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/addons-909789/id_rsa Username:docker}
	I0914 22:27:32.335695 2846606 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:27:32.339649 2846606 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 22:27:32.339693 2846606 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 22:27:32.339705 2846606 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 22:27:32.339715 2846606 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0914 22:27:32.339727 2846606 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-2840729/.minikube/addons for local assets ...
	I0914 22:27:32.339790 2846606 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-2840729/.minikube/files for local assets ...
	I0914 22:27:32.339815 2846606 start.go:303] post-start completed in 124.771832ms
	I0914 22:27:32.340119 2846606 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-909789
	I0914 22:27:32.357782 2846606 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/config.json ...
	I0914 22:27:32.358065 2846606 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 22:27:32.358121 2846606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-909789
	I0914 22:27:32.375455 2846606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36388 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/addons-909789/id_rsa Username:docker}
	I0914 22:27:32.474436 2846606 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 22:27:32.479885 2846606 start.go:128] duration metric: createHost completed in 13.389046945s
	I0914 22:27:32.479954 2846606 start.go:83] releasing machines lock for "addons-909789", held for 13.389243186s
	I0914 22:27:32.480045 2846606 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-909789
	I0914 22:27:32.498949 2846606 ssh_runner.go:195] Run: cat /version.json
	I0914 22:27:32.499011 2846606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-909789
	I0914 22:27:32.499264 2846606 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:27:32.499326 2846606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-909789
	I0914 22:27:32.522387 2846606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36388 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/addons-909789/id_rsa Username:docker}
	I0914 22:27:32.528592 2846606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36388 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/addons-909789/id_rsa Username:docker}
	I0914 22:27:32.621146 2846606 ssh_runner.go:195] Run: systemctl --version
	I0914 22:27:32.758909 2846606 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:27:32.906880 2846606 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 22:27:32.912100 2846606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:27:32.935849 2846606 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0914 22:27:32.935978 2846606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:27:32.971832 2846606 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0914 22:27:32.971910 2846606 start.go:469] detecting cgroup driver to use...
	I0914 22:27:32.971983 2846606 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0914 22:27:32.972072 2846606 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:27:32.991984 2846606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:27:33.005743 2846606 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:27:33.005809 2846606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:27:33.021917 2846606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:27:33.039244 2846606 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:27:33.130938 2846606 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:27:33.230841 2846606 docker.go:212] disabling docker service ...
	I0914 22:27:33.230904 2846606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:27:33.252405 2846606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:27:33.266563 2846606 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:27:33.356709 2846606 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:27:33.465912 2846606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:27:33.478971 2846606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:27:33.498391 2846606 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:27:33.498462 2846606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:27:33.509938 2846606 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:27:33.510026 2846606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:27:33.521675 2846606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:27:33.533214 2846606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
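
Taken together, the sed edits above leave the CRI-O drop-in roughly in the following shape. This is a sketch assuming the usual section layout of /etc/crio/crio.conf.d/02-crio.conf; the file on the node may contain additional keys:

# illustrative only: approximate net effect of the pause_image / cgroup_manager / conmon_cgroup edits
cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/02-crio.conf
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
EOF
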
	I0914 22:27:33.544759 2846606 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:27:33.555511 2846606 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:27:33.565372 2846606 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:27:33.575081 2846606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:27:33.670758 2846606 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:27:33.789439 2846606 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:27:33.789543 2846606 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:27:33.793987 2846606 start.go:537] Will wait 60s for crictl version
	I0914 22:27:33.794090 2846606 ssh_runner.go:195] Run: which crictl
	I0914 22:27:33.798320 2846606 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:27:33.842204 2846606 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0914 22:27:33.842302 2846606 ssh_runner.go:195] Run: crio --version
	I0914 22:27:33.887163 2846606 ssh_runner.go:195] Run: crio --version
	I0914 22:27:33.940184 2846606 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0914 22:27:33.942564 2846606 cli_runner.go:164] Run: docker network inspect addons-909789 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 22:27:33.963966 2846606 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0914 22:27:33.968516 2846606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:27:33.981518 2846606 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:27:33.981582 2846606 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:27:34.048255 2846606 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 22:27:34.048276 2846606 crio.go:415] Images already preloaded, skipping extraction
	I0914 22:27:34.048332 2846606 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:27:34.092013 2846606 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 22:27:34.092032 2846606 cache_images.go:84] Images are preloaded, skipping loading
	I0914 22:27:34.092131 2846606 ssh_runner.go:195] Run: crio config
	I0914 22:27:34.146033 2846606 cni.go:84] Creating CNI manager for ""
	I0914 22:27:34.146057 2846606 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 22:27:34.146113 2846606 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:27:34.146140 2846606 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-909789 NodeName:addons-909789 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:27:34.146289 2846606 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-909789"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:27:34.146371 2846606 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-909789 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:addons-909789 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 22:27:34.146439 2846606 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:27:34.157076 2846606 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:27:34.157147 2846606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:27:34.167722 2846606 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0914 22:27:34.188419 2846606 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:27:34.209576 2846606 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0914 22:27:34.230053 2846606 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0914 22:27:34.234334 2846606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:27:34.247473 2846606 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789 for IP: 192.168.49.2
	I0914 22:27:34.247503 2846606 certs.go:190] acquiring lock for shared ca certs: {Name:mk7b43b7d537d49c569d06654003547535d1ca4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:27:34.247625 2846606 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.key
	I0914 22:27:35.078640 2846606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt ...
	I0914 22:27:35.078676 2846606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt: {Name:mk51d7ad05ca460fe20765345057ed55a96223a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:27:35.079319 2846606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.key ...
	I0914 22:27:35.079334 2846606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.key: {Name:mkccce917c139fd2d32afc6b4258f585cbafb8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:27:35.079894 2846606 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.key
	I0914 22:27:35.576396 2846606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.crt ...
	I0914 22:27:35.576426 2846606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.crt: {Name:mk659395c47322007fc5ba339a2eca56072ee367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:27:35.577186 2846606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.key ...
	I0914 22:27:35.577201 2846606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.key: {Name:mk85556723fbf99dd5d627fd9d0ae917269782ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:27:35.577327 2846606 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.key
	I0914 22:27:35.577346 2846606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt with IP's: []
	I0914 22:27:36.262700 2846606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt ...
	I0914 22:27:36.262729 2846606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: {Name:mka908b6c8ee3aaf459f4993593a80122c4cc288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:27:36.262928 2846606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.key ...
	I0914 22:27:36.262941 2846606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.key: {Name:mkadd6477451d95897d7e17678562e0d5b6af516 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:27:36.263648 2846606 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/apiserver.key.dd3b5fb2
	I0914 22:27:36.263669 2846606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0914 22:27:37.583848 2846606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/apiserver.crt.dd3b5fb2 ...
	I0914 22:27:37.583878 2846606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/apiserver.crt.dd3b5fb2: {Name:mka65da7ebae7ef4e2ef64e4a9637c9c06be23c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:27:37.584068 2846606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/apiserver.key.dd3b5fb2 ...
	I0914 22:27:37.584081 2846606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/apiserver.key.dd3b5fb2: {Name:mk0d78a27193eaf95a79921d4c1c10ffba5aef44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:27:37.584166 2846606 certs.go:337] copying /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/apiserver.crt
	I0914 22:27:37.584247 2846606 certs.go:341] copying /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/apiserver.key
	I0914 22:27:37.584299 2846606 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/proxy-client.key
	I0914 22:27:37.584321 2846606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/proxy-client.crt with IP's: []
	I0914 22:27:38.271516 2846606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/proxy-client.crt ...
	I0914 22:27:38.271551 2846606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/proxy-client.crt: {Name:mk958590ceaa8523dc90655874155432a5f15418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:27:38.271755 2846606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/proxy-client.key ...
	I0914 22:27:38.271769 2846606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/proxy-client.key: {Name:mkcf3d01309e6a3befd420eada3af835046827e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:27:38.271962 2846606 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:27:38.272009 2846606 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:27:38.272041 2846606 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:27:38.272070 2846606 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem (1675 bytes)
	I0914 22:27:38.272680 2846606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:27:38.302354 2846606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 22:27:38.331209 2846606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:27:38.359044 2846606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 22:27:38.386801 2846606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:27:38.414551 2846606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 22:27:38.442794 2846606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:27:38.470885 2846606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:27:38.499269 2846606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:27:38.528030 2846606 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:27:38.549162 2846606 ssh_runner.go:195] Run: openssl version
	I0914 22:27:38.556155 2846606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:27:38.567607 2846606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:27:38.572206 2846606 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 22:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:27:38.572272 2846606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:27:38.581107 2846606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
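
The b5213941.0 symlink created above is the OpenSSL subject-hash name of the minikube CA. A small sketch of how that link name relates to the openssl -hash output from the log:

# illustrative only: recreate the hash-named symlink for the CA certificate
HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
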
	I0914 22:27:38.592586 2846606 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:27:38.596960 2846606 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 22:27:38.597007 2846606 kubeadm.go:404] StartCluster: {Name:addons-909789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-909789 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:27:38.597084 2846606 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:27:38.597154 2846606 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:27:38.641477 2846606 cri.go:89] found id: ""
	I0914 22:27:38.641589 2846606 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:27:38.652298 2846606 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:27:38.663307 2846606 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0914 22:27:38.663369 2846606 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:27:38.674485 2846606 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:27:38.674525 2846606 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0914 22:27:38.775290 2846606 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-aws\n", err: exit status 1
	I0914 22:27:38.861059 2846606 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 22:27:53.183264 2846606 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0914 22:27:53.183316 2846606 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 22:27:53.183398 2846606 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0914 22:27:53.183450 2846606 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1044-aws
	I0914 22:27:53.183482 2846606 kubeadm.go:322] OS: Linux
	I0914 22:27:53.183524 2846606 kubeadm.go:322] CGROUPS_CPU: enabled
	I0914 22:27:53.183569 2846606 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0914 22:27:53.183613 2846606 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0914 22:27:53.183660 2846606 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0914 22:27:53.183704 2846606 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0914 22:27:53.183751 2846606 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0914 22:27:53.183793 2846606 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0914 22:27:53.183838 2846606 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0914 22:27:53.183882 2846606 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0914 22:27:53.183949 2846606 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 22:27:53.184036 2846606 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 22:27:53.184121 2846606 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 22:27:53.184178 2846606 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 22:27:53.186244 2846606 out.go:204]   - Generating certificates and keys ...
	I0914 22:27:53.186330 2846606 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 22:27:53.186399 2846606 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 22:27:53.186461 2846606 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 22:27:53.186514 2846606 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0914 22:27:53.186570 2846606 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0914 22:27:53.186618 2846606 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0914 22:27:53.186668 2846606 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0914 22:27:53.186782 2846606 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-909789 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0914 22:27:53.186838 2846606 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0914 22:27:53.186945 2846606 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-909789 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0914 22:27:53.187006 2846606 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 22:27:53.187065 2846606 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 22:27:53.187106 2846606 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0914 22:27:53.187158 2846606 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 22:27:53.187205 2846606 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 22:27:53.187255 2846606 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 22:27:53.187314 2846606 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 22:27:53.187365 2846606 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 22:27:53.187440 2846606 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 22:27:53.187501 2846606 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 22:27:53.189586 2846606 out.go:204]   - Booting up control plane ...
	I0914 22:27:53.189772 2846606 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 22:27:53.189903 2846606 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 22:27:53.189982 2846606 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 22:27:53.190087 2846606 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 22:27:53.190174 2846606 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 22:27:53.190214 2846606 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0914 22:27:53.190369 2846606 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 22:27:53.190446 2846606 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002794 seconds
	I0914 22:27:53.190556 2846606 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 22:27:53.190690 2846606 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 22:27:53.190751 2846606 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 22:27:53.190933 2846606 kubeadm.go:322] [mark-control-plane] Marking the node addons-909789 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 22:27:53.190996 2846606 kubeadm.go:322] [bootstrap-token] Using token: plicfd.58h2ad8ywf0ha12d
	I0914 22:27:53.192987 2846606 out.go:204]   - Configuring RBAC rules ...
	I0914 22:27:53.193095 2846606 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 22:27:53.193180 2846606 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 22:27:53.193321 2846606 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 22:27:53.193449 2846606 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 22:27:53.193565 2846606 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 22:27:53.193665 2846606 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 22:27:53.193781 2846606 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 22:27:53.193825 2846606 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 22:27:53.193871 2846606 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 22:27:53.193875 2846606 kubeadm.go:322] 
	I0914 22:27:53.193936 2846606 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 22:27:53.193940 2846606 kubeadm.go:322] 
	I0914 22:27:53.194020 2846606 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 22:27:53.194025 2846606 kubeadm.go:322] 
	I0914 22:27:53.194051 2846606 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 22:27:53.194111 2846606 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 22:27:53.194162 2846606 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 22:27:53.194166 2846606 kubeadm.go:322] 
	I0914 22:27:53.194221 2846606 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0914 22:27:53.194226 2846606 kubeadm.go:322] 
	I0914 22:27:53.194274 2846606 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 22:27:53.194279 2846606 kubeadm.go:322] 
	I0914 22:27:53.194332 2846606 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 22:27:53.194408 2846606 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 22:27:53.194477 2846606 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 22:27:53.194481 2846606 kubeadm.go:322] 
	I0914 22:27:53.194566 2846606 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 22:27:53.194644 2846606 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 22:27:53.194649 2846606 kubeadm.go:322] 
	I0914 22:27:53.194742 2846606 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token plicfd.58h2ad8ywf0ha12d \
	I0914 22:27:53.194849 2846606 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a79084f9d3253dc9cd91fd80defbeb60d0999d7c0aaf6667eb02a65d009cd9dc \
	I0914 22:27:53.194870 2846606 kubeadm.go:322] 	--control-plane 
	I0914 22:27:53.194874 2846606 kubeadm.go:322] 
	I0914 22:27:53.194960 2846606 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 22:27:53.194965 2846606 kubeadm.go:322] 
	I0914 22:27:53.195048 2846606 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token plicfd.58h2ad8ywf0ha12d \
	I0914 22:27:53.195159 2846606 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a79084f9d3253dc9cd91fd80defbeb60d0999d7c0aaf6667eb02a65d009cd9dc 
	I0914 22:27:53.195167 2846606 cni.go:84] Creating CNI manager for ""
	I0914 22:27:53.195174 2846606 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 22:27:53.197364 2846606 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0914 22:27:53.199495 2846606 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 22:27:53.213359 2846606 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0914 22:27:53.213380 2846606 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0914 22:27:53.238014 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 22:27:54.083158 2846606 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:27:54.083292 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:27:54.083382 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=addons-909789 minikube.k8s.io/updated_at=2023_09_14T22_27_54_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:27:54.101194 2846606 ops.go:34] apiserver oom_adj: -16
	I0914 22:27:54.200905 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:27:54.322298 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:27:54.922153 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:27:55.422060 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:27:55.921814 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:27:56.422541 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:27:56.921888 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:27:57.421814 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:27:57.922640 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:27:58.422156 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:27:58.922118 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:27:59.422641 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:27:59.922767 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:28:00.421810 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:28:00.922768 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:28:01.421816 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:28:01.922541 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:28:02.421870 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:28:02.921830 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:28:03.422164 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:28:03.922565 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:28:04.422774 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:28:04.922589 2846606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:28:05.030318 2846606 kubeadm.go:1081] duration metric: took 10.947072319s to wait for elevateKubeSystemPrivileges.
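
The repeated "kubectl get sa default" calls above appear to be a readiness poll: the tool waits for the default ServiceAccount to exist before the kube-system privileges step completes. A minimal sketch of that polling pattern, using the binary and kubeconfig paths from the log:

# illustrative polling loop: wait until the "default" ServiceAccount exists
until sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done
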
	I0914 22:28:05.030344 2846606 kubeadm.go:406] StartCluster complete in 26.433340641s
	I0914 22:28:05.030361 2846606 settings.go:142] acquiring lock: {Name:mk797c549b93011f59a1b1413899d7ef3e9584bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:28:05.030475 2846606 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 22:28:05.030869 2846606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/kubeconfig: {Name:mk7bbed64d52f47ff1629e01e738a8a5f092c9ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:28:05.032744 2846606 config.go:182] Loaded profile config "addons-909789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:28:05.032795 2846606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:28:05.032894 2846606 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0914 22:28:05.032967 2846606 addons.go:69] Setting volumesnapshots=true in profile "addons-909789"
	I0914 22:28:05.032979 2846606 addons.go:231] Setting addon volumesnapshots=true in "addons-909789"
	I0914 22:28:05.033012 2846606 host.go:66] Checking if "addons-909789" exists ...
	I0914 22:28:05.033455 2846606 cli_runner.go:164] Run: docker container inspect addons-909789 --format={{.State.Status}}
	I0914 22:28:05.033850 2846606 addons.go:69] Setting cloud-spanner=true in profile "addons-909789"
	I0914 22:28:05.033876 2846606 addons.go:231] Setting addon cloud-spanner=true in "addons-909789"
	I0914 22:28:05.033917 2846606 host.go:66] Checking if "addons-909789" exists ...
	I0914 22:28:05.034310 2846606 cli_runner.go:164] Run: docker container inspect addons-909789 --format={{.State.Status}}
	I0914 22:28:05.034768 2846606 addons.go:69] Setting inspektor-gadget=true in profile "addons-909789"
	I0914 22:28:05.034808 2846606 addons.go:231] Setting addon inspektor-gadget=true in "addons-909789"
	I0914 22:28:05.034857 2846606 host.go:66] Checking if "addons-909789" exists ...
	I0914 22:28:05.035261 2846606 cli_runner.go:164] Run: docker container inspect addons-909789 --format={{.State.Status}}
	I0914 22:28:05.035598 2846606 addons.go:69] Setting metrics-server=true in profile "addons-909789"
	I0914 22:28:05.035623 2846606 addons.go:231] Setting addon metrics-server=true in "addons-909789"
	I0914 22:28:05.035662 2846606 host.go:66] Checking if "addons-909789" exists ...
	I0914 22:28:05.036054 2846606 cli_runner.go:164] Run: docker container inspect addons-909789 --format={{.State.Status}}
	I0914 22:28:05.036352 2846606 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-909789"
	I0914 22:28:05.036391 2846606 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-909789"
	I0914 22:28:05.036421 2846606 host.go:66] Checking if "addons-909789" exists ...
	I0914 22:28:05.036821 2846606 cli_runner.go:164] Run: docker container inspect addons-909789 --format={{.State.Status}}
	I0914 22:28:05.044656 2846606 addons.go:69] Setting registry=true in profile "addons-909789"
	I0914 22:28:05.044700 2846606 addons.go:231] Setting addon registry=true in "addons-909789"
	I0914 22:28:05.044746 2846606 host.go:66] Checking if "addons-909789" exists ...
	I0914 22:28:05.045254 2846606 cli_runner.go:164] Run: docker container inspect addons-909789 --format={{.State.Status}}
	I0914 22:28:05.055708 2846606 addons.go:69] Setting default-storageclass=true in profile "addons-909789"
	I0914 22:28:05.055749 2846606 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-909789"
	I0914 22:28:05.056096 2846606 cli_runner.go:164] Run: docker container inspect addons-909789 --format={{.State.Status}}
	I0914 22:28:05.058651 2846606 addons.go:69] Setting storage-provisioner=true in profile "addons-909789"
	I0914 22:28:05.058688 2846606 addons.go:231] Setting addon storage-provisioner=true in "addons-909789"
	I0914 22:28:05.058739 2846606 host.go:66] Checking if "addons-909789" exists ...
	I0914 22:28:05.059187 2846606 cli_runner.go:164] Run: docker container inspect addons-909789 --format={{.State.Status}}
	I0914 22:28:05.080637 2846606 addons.go:69] Setting gcp-auth=true in profile "addons-909789"
	I0914 22:28:05.080862 2846606 mustload.go:65] Loading cluster: addons-909789
	I0914 22:28:05.089932 2846606 addons.go:69] Setting ingress=true in profile "addons-909789"
	I0914 22:28:05.089971 2846606 addons.go:231] Setting addon ingress=true in "addons-909789"
	I0914 22:28:05.090027 2846606 host.go:66] Checking if "addons-909789" exists ...
	I0914 22:28:05.090486 2846606 cli_runner.go:164] Run: docker container inspect addons-909789 --format={{.State.Status}}
	I0914 22:28:05.115063 2846606 addons.go:69] Setting ingress-dns=true in profile "addons-909789"
	I0914 22:28:05.115095 2846606 addons.go:231] Setting addon ingress-dns=true in "addons-909789"
	I0914 22:28:05.115150 2846606 host.go:66] Checking if "addons-909789" exists ...
	I0914 22:28:05.115588 2846606 cli_runner.go:164] Run: docker container inspect addons-909789 --format={{.State.Status}}
	I0914 22:28:05.132861 2846606 config.go:182] Loaded profile config "addons-909789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:28:05.133134 2846606 cli_runner.go:164] Run: docker container inspect addons-909789 --format={{.State.Status}}
	I0914 22:28:05.197501 2846606 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	I0914 22:28:05.202957 2846606 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0914 22:28:05.202981 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0914 22:28:05.203049 2846606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-909789
	I0914 22:28:05.224408 2846606 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0914 22:28:05.225979 2846606 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	I0914 22:28:05.244711 2846606 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0914 22:28:05.244727 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0914 22:28:05.244797 2846606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-909789
	I0914 22:28:05.257850 2846606 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:28:05.244582 2846606 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0914 22:28:05.244570 2846606 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0914 22:28:05.277644 2846606 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0914 22:28:05.277811 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0914 22:28:05.279968 2846606 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:28:05.279978 2846606 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.2
	I0914 22:28:05.282792 2846606 host.go:66] Checking if "addons-909789" exists ...
	I0914 22:28:05.283153 2846606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-909789
	I0914 22:28:05.289377 2846606 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0914 22:28:05.289392 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:28:05.293413 2846606 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 22:28:05.299849 2846606 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0914 22:28:05.299916 2846606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36388 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/addons-909789/id_rsa Username:docker}
	I0914 22:28:05.300217 2846606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-909789
	I0914 22:28:05.305849 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 22:28:05.308093 2846606 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 22:28:05.310769 2846606 addons.go:231] Setting addon default-storageclass=true in "addons-909789"
	I0914 22:28:05.313856 2846606 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0914 22:28:05.313928 2846606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-909789
	I0914 22:28:05.321900 2846606 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0914 22:28:05.324272 2846606 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0914 22:28:05.326434 2846606 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0914 22:28:05.325155 2846606 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0914 22:28:05.325169 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0914 22:28:05.325212 2846606 host.go:66] Checking if "addons-909789" exists ...
	I0914 22:28:05.330059 2846606 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0914 22:28:05.328358 2846606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-909789
	I0914 22:28:05.328916 2846606 cli_runner.go:164] Run: docker container inspect addons-909789 --format={{.State.Status}}
	I0914 22:28:05.347772 2846606 out.go:177]   - Using image docker.io/registry:2.8.1
	I0914 22:28:05.349794 2846606 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0914 22:28:05.349816 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0914 22:28:05.349880 2846606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-909789
	I0914 22:28:05.332393 2846606 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 22:28:05.367001 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0914 22:28:05.367075 2846606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-909789
	I0914 22:28:05.370872 2846606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36388 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/addons-909789/id_rsa Username:docker}
	I0914 22:28:05.332402 2846606 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0914 22:28:05.376635 2846606 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0914 22:28:05.382387 2846606 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0914 22:28:05.388607 2846606 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0914 22:28:05.388627 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0914 22:28:05.388694 2846606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-909789
	I0914 22:28:05.400388 2846606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36388 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/addons-909789/id_rsa Username:docker}
	I0914 22:28:05.404066 2846606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36388 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/addons-909789/id_rsa Username:docker}
	I0914 22:28:05.424842 2846606 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-909789" context rescaled to 1 replicas
	I0914 22:28:05.424880 2846606 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:28:05.426866 2846606 out.go:177] * Verifying Kubernetes components...
	I0914 22:28:05.432600 2846606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:28:05.458740 2846606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 22:28:05.464637 2846606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36388 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/addons-909789/id_rsa Username:docker}
	I0914 22:28:05.513732 2846606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36388 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/addons-909789/id_rsa Username:docker}
	I0914 22:28:05.518517 2846606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36388 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/addons-909789/id_rsa Username:docker}
	I0914 22:28:05.530753 2846606 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:28:05.530773 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:28:05.530841 2846606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-909789
	I0914 22:28:05.541820 2846606 node_ready.go:35] waiting up to 6m0s for node "addons-909789" to be "Ready" ...
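
From here node_ready.go polls the node object until its Ready condition reports True, within the 6m0s budget set above; the recurring "has status \"Ready\":\"False\"" lines that follow are that poll. A hedged approximation of the loop, shelling out to kubectl instead of minikube's internal client (node name and timeout come from the log; the 2-second interval is an assumption):

    // Hypothetical sketch of the readiness poll behind node_ready.go.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // nodeReady reports whether the node's Ready condition is currently "True".
    func nodeReady(name string) bool {
        jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
        out, err := exec.Command("kubectl", "get", "node", name, "-o", "jsonpath="+jsonpath).Output()
        return err == nil && strings.TrimSpace(string(out)) == "True"
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // the 6m0s budget from the log
        for time.Now().Before(deadline) {
            if nodeReady("addons-909789") {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(2 * time.Second) // poll interval is an assumption
        }
        fmt.Println("timed out waiting for node to become Ready")
    }
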
	I0914 22:28:05.549413 2846606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36388 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/addons-909789/id_rsa Username:docker}
	I0914 22:28:05.566589 2846606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36388 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/addons-909789/id_rsa Username:docker}
	I0914 22:28:05.592682 2846606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36388 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/addons-909789/id_rsa Username:docker}
	I0914 22:28:05.702582 2846606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:28:05.717301 2846606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0914 22:28:05.747067 2846606 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 22:28:05.747091 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0914 22:28:05.805374 2846606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:28:05.808455 2846606 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0914 22:28:05.808485 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0914 22:28:05.837824 2846606 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0914 22:28:05.837848 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0914 22:28:05.865174 2846606 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 22:28:05.865206 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 22:28:06.006923 2846606 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0914 22:28:06.006949 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0914 22:28:06.018841 2846606 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0914 22:28:06.018875 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0914 22:28:06.021865 2846606 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:28:06.021889 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 22:28:06.048681 2846606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 22:28:06.058037 2846606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 22:28:06.068440 2846606 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0914 22:28:06.068463 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0914 22:28:06.083001 2846606 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0914 22:28:06.083036 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0914 22:28:06.170653 2846606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 22:28:06.178354 2846606 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0914 22:28:06.178387 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0914 22:28:06.179208 2846606 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0914 22:28:06.179224 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0914 22:28:06.195002 2846606 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0914 22:28:06.195070 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0914 22:28:06.269084 2846606 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0914 22:28:06.269154 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0914 22:28:06.315759 2846606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0914 22:28:06.336706 2846606 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0914 22:28:06.336767 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0914 22:28:06.344670 2846606 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0914 22:28:06.344729 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0914 22:28:06.407085 2846606 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0914 22:28:06.407156 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0914 22:28:06.439424 2846606 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 22:28:06.439484 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0914 22:28:06.503471 2846606 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0914 22:28:06.503543 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0914 22:28:06.511781 2846606 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0914 22:28:06.511849 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0914 22:28:06.559695 2846606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 22:28:06.587353 2846606 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0914 22:28:06.587436 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0914 22:28:06.614875 2846606 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0914 22:28:06.614937 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0914 22:28:06.641649 2846606 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0914 22:28:06.641718 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0914 22:28:06.667385 2846606 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 22:28:06.667456 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0914 22:28:06.733063 2846606 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0914 22:28:06.733134 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0914 22:28:06.782930 2846606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 22:28:06.852250 2846606 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0914 22:28:06.852321 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0914 22:28:06.884033 2846606 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0914 22:28:06.884102 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0914 22:28:06.934864 2846606 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 22:28:06.934927 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0914 22:28:07.015627 2846606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 22:28:07.768313 2846606 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.309537946s)
	I0914 22:28:07.768384 2846606 start.go:917] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
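
The bash pipeline that just completed edits the coredns ConfigMap in place: the sed splices a hosts block in front of the forward plugin so host.minikube.internal resolves to the gateway address 192.168.49.1, and adds the log plugin ahead of errors. After the replace, the affected part of the Corefile looks roughly like this (other plugins and the surrounding server block elided):

        log
        errors
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
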
	I0914 22:28:07.991247 2846606 node_ready.go:58] node "addons-909789" has status "Ready":"False"
	I0914 22:28:09.979434 2846606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.276815366s)
	I0914 22:28:09.979529 2846606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.262204345s)
	I0914 22:28:09.979588 2846606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.174182223s)
	I0914 22:28:10.027088 2846606 node_ready.go:58] node "addons-909789" has status "Ready":"False"
	I0914 22:28:11.133975 2846606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.085250431s)
	I0914 22:28:11.134046 2846606 addons.go:467] Verifying addon ingress=true in "addons-909789"
	I0914 22:28:11.136276 2846606 out.go:177] * Verifying ingress addon...
	I0914 22:28:11.134225 2846606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.076159842s)
	I0914 22:28:11.134293 2846606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.963611119s)
	I0914 22:28:11.134318 2846606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.818490775s)
	I0914 22:28:11.134397 2846606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.574629825s)
	I0914 22:28:11.134444 2846606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.351438043s)
	I0914 22:28:11.139613 2846606 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0914 22:28:11.139667 2846606 addons.go:467] Verifying addon metrics-server=true in "addons-909789"
	I0914 22:28:11.139683 2846606 addons.go:467] Verifying addon registry=true in "addons-909789"
	W0914 22:28:11.139710 2846606 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0914 22:28:11.140037 2846606 retry.go:31] will retry after 242.737126ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
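
The failure and the scheduled retry above come from an ordering problem: a single kubectl apply creates both the snapshot.storage.k8s.io CRDs and a VolumeSnapshotClass that depends on them, and the CRDs are not yet established when the custom resource is mapped, hence "ensure CRDs are installed first". minikube's answer is to retry after ~242ms, and the 22:28:11.383930 line further down shows the retry going out with apply --force. A hedged sketch of that retry shape (file list trimmed to two of the six manifests; the 3-attempt cap and 250ms delay are assumptions, not minikube's actual backoff):

    // Hypothetical sketch of the retry pattern retry.go applies here.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // applyAddon re-runs the same apply; --force mirrors the retried command in the log.
    func applyAddon(files ...string) error {
        args := []string{"apply", "--force"}
        for _, f := range files {
            args = append(args, "-f", f)
        }
        return exec.Command("kubectl", args...).Run()
    }

    func main() {
        files := []string{
            "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
            "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
        }
        for attempt := 1; attempt <= 3; attempt++ {
            if err := applyAddon(files...); err == nil {
                fmt.Println("applied on attempt", attempt)
                return
            }
            time.Sleep(250 * time.Millisecond) // the log shows a ~242ms retry delay
        }
        fmt.Println("giving up after retries")
    }
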
	I0914 22:28:11.143985 2846606 out.go:177] * Verifying registry addon...
	I0914 22:28:11.147152 2846606 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0914 22:28:11.153629 2846606 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0914 22:28:11.153700 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:11.156866 2846606 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0914 22:28:11.156888 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:11.163193 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:11.166125 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:11.383930 2846606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 22:28:11.405063 2846606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.38933711s)
	I0914 22:28:11.405136 2846606 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-909789"
	I0914 22:28:11.407554 2846606 out.go:177] * Verifying csi-hostpath-driver addon...
	I0914 22:28:11.411075 2846606 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0914 22:28:11.433594 2846606 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0914 22:28:11.433615 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:11.456479 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
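
The kapi.go lines in this stretch poll each addon's labeled pods until they leave Pending and report Ready; functionally that is close to a single kubectl wait on the label selector, sketched below (label and namespace come from the log; running this outside the node assumes a reachable kubeconfig for the cluster):

    // Hypothetical sketch: roughly what the kapi.go wait loop amounts to.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("kubectl", "-n", "kube-system", "wait", "pod",
            "-l", "kubernetes.io/minikube-addons=csi-hostpath-driver",
            "--for=condition=Ready", "--timeout=6m0s")
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("wait failed: %v\n%s", err, out)
        }
    }
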
	I0914 22:28:11.734437 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:11.781552 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:11.967305 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:12.168812 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:12.179735 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:12.393947 2846606 node_ready.go:58] node "addons-909789" has status "Ready":"False"
	I0914 22:28:12.466046 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:12.667653 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:12.693742 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:12.961876 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:13.179021 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:13.219210 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:13.472189 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:13.504111 2846606 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0914 22:28:13.504263 2846606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-909789
	I0914 22:28:13.510477 2846606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.126465498s)
	I0914 22:28:13.550935 2846606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36388 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/addons-909789/id_rsa Username:docker}
	I0914 22:28:13.669667 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:13.670804 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:13.745359 2846606 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0914 22:28:13.798507 2846606 addons.go:231] Setting addon gcp-auth=true in "addons-909789"
	I0914 22:28:13.798594 2846606 host.go:66] Checking if "addons-909789" exists ...
	I0914 22:28:13.799109 2846606 cli_runner.go:164] Run: docker container inspect addons-909789 --format={{.State.Status}}
	I0914 22:28:13.830083 2846606 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0914 22:28:13.830147 2846606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-909789
	I0914 22:28:13.855360 2846606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36388 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/addons-909789/id_rsa Username:docker}
	I0914 22:28:13.970998 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:13.979219 2846606 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0914 22:28:13.981641 2846606 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0914 22:28:13.983469 2846606 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0914 22:28:13.983493 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0914 22:28:14.039097 2846606 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0914 22:28:14.039135 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0914 22:28:14.128717 2846606 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 22:28:14.128737 2846606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0914 22:28:14.173157 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:14.174232 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:14.190634 2846606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 22:28:14.468617 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:14.671794 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:14.676886 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:14.885389 2846606 node_ready.go:58] node "addons-909789" has status "Ready":"False"
	I0914 22:28:14.966866 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:15.130613 2846606 addons.go:467] Verifying addon gcp-auth=true in "addons-909789"
	I0914 22:28:15.134824 2846606 out.go:177] * Verifying gcp-auth addon...
	I0914 22:28:15.137639 2846606 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0914 22:28:15.163920 2846606 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0914 22:28:15.163987 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:15.231176 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:15.232866 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:15.239720 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:15.460845 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:15.678111 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:15.691882 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:15.735977 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:15.962987 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:16.167807 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:16.169992 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:16.236017 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:16.463972 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:16.668644 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:16.669954 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:16.735105 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:16.885726 2846606 node_ready.go:58] node "addons-909789" has status "Ready":"False"
	I0914 22:28:16.963529 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:17.167873 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:17.170888 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:17.235653 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:17.461704 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:17.667830 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:17.671350 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:17.734991 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:17.961268 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:18.168455 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:18.171999 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:18.235657 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:18.461667 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:18.668324 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:18.671500 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:18.735195 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:18.885828 2846606 node_ready.go:58] node "addons-909789" has status "Ready":"False"
	I0914 22:28:18.962087 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:19.167809 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:19.170731 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:19.235067 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:19.461513 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:19.668657 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:19.675659 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:19.735389 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:19.961797 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:20.169014 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:20.172737 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:20.237927 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:20.462705 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:20.670606 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:20.674732 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:20.735347 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:20.886463 2846606 node_ready.go:58] node "addons-909789" has status "Ready":"False"
	I0914 22:28:20.962064 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:21.167159 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:21.175572 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:21.235114 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:21.461916 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:21.669648 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:21.670096 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:21.735370 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:21.961938 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:22.168037 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:22.172822 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:22.234908 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:22.462516 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:22.667918 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:22.670235 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:22.735236 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:22.886549 2846606 node_ready.go:58] node "addons-909789" has status "Ready":"False"
	I0914 22:28:22.960997 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:23.168080 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:23.171159 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:23.235135 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:23.461413 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:23.667640 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:23.670013 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:23.735165 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:23.960603 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:24.167960 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:24.170420 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:24.235579 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:24.460993 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:24.667251 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:24.669825 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:24.734903 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:24.961199 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:25.167764 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:25.171406 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:25.235708 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:25.385685 2846606 node_ready.go:58] node "addons-909789" has status "Ready":"False"
	I0914 22:28:25.460925 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:25.669152 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:25.670944 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:25.735742 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:25.961226 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:26.167549 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:26.170082 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:26.235683 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:26.460899 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:26.667861 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:26.670068 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:26.735185 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:26.961222 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:27.168096 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:27.170378 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:27.235621 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:27.461389 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:27.668747 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:27.672083 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:27.735398 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:27.885377 2846606 node_ready.go:58] node "addons-909789" has status "Ready":"False"
	I0914 22:28:27.961221 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:28.167783 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:28.170853 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:28.235276 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:28.460527 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:28.669576 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:28.670706 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:28.734627 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:28.961189 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:29.169439 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:29.171534 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:29.234587 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:29.461300 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:29.669269 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:29.670184 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:29.734427 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:29.885957 2846606 node_ready.go:58] node "addons-909789" has status "Ready":"False"
	I0914 22:28:29.962175 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:30.167496 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:30.172849 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:30.235327 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:30.461144 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:30.668006 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:30.671509 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:30.734829 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:30.961392 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:31.167678 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:31.170216 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:31.234850 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:31.460607 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:31.667392 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:31.669883 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:31.735103 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:31.961595 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:32.168114 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:32.171541 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:32.234899 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:32.387155 2846606 node_ready.go:58] node "addons-909789" has status "Ready":"False"
	I0914 22:28:32.461743 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:32.668301 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:32.671069 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:32.735526 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:32.961045 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:33.168044 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:33.170708 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:33.235431 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:33.461518 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:33.667875 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:33.670383 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:33.734915 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:33.960889 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:34.167773 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:34.170289 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:34.234872 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:34.461155 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:34.667796 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:34.678440 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:34.734937 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:34.886832 2846606 node_ready.go:58] node "addons-909789" has status "Ready":"False"
	I0914 22:28:34.961598 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:35.168697 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:35.171168 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:35.235358 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:35.466318 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:35.668774 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:35.671487 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:35.734488 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:35.961232 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:36.169077 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:36.171324 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:36.234874 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:36.461027 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:36.668932 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:36.670476 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:36.734617 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:36.961141 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:37.167268 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:37.169751 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:37.235185 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:37.385623 2846606 node_ready.go:58] node "addons-909789" has status "Ready":"False"
	I0914 22:28:37.461286 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:37.668859 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:37.670796 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:37.734783 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:37.961580 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:38.167604 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:38.171130 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:38.235485 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:38.461293 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:38.668192 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:38.670344 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:38.735414 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:38.961159 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:39.168157 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:39.171877 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:39.234955 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:39.386012 2846606 node_ready.go:58] node "addons-909789" has status "Ready":"False"
	I0914 22:28:39.460738 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:39.667468 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:39.669933 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:39.735019 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:39.960891 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:40.168019 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:40.171325 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:40.234814 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:40.461114 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:40.667504 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:40.669566 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:40.734848 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:40.961722 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:41.168816 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:41.170728 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:41.235217 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:41.460936 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:41.667838 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:41.670255 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:41.735774 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:41.918068 2846606 node_ready.go:49] node "addons-909789" has status "Ready":"True"
	I0914 22:28:41.918094 2846606 node_ready.go:38] duration metric: took 36.376237206s waiting for node "addons-909789" to be "Ready" ...
	I0914 22:28:41.918106 2846606 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:28:41.966549 2846606 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4f5c5" in "kube-system" namespace to be "Ready" ...
	I0914 22:28:42.003089 2846606 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0914 22:28:42.003115 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:42.205209 2846606 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0914 22:28:42.205279 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:42.206331 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:42.292870 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:42.481356 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:42.688644 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:42.689513 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:42.736507 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:42.962820 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:43.105967 2846606 pod_ready.go:92] pod "coredns-5dd5756b68-4f5c5" in "kube-system" namespace has status "Ready":"True"
	I0914 22:28:43.105994 2846606 pod_ready.go:81] duration metric: took 1.139408451s waiting for pod "coredns-5dd5756b68-4f5c5" in "kube-system" namespace to be "Ready" ...
	I0914 22:28:43.106016 2846606 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-909789" in "kube-system" namespace to be "Ready" ...
	I0914 22:28:43.111154 2846606 pod_ready.go:92] pod "etcd-addons-909789" in "kube-system" namespace has status "Ready":"True"
	I0914 22:28:43.111178 2846606 pod_ready.go:81] duration metric: took 5.154631ms waiting for pod "etcd-addons-909789" in "kube-system" namespace to be "Ready" ...
	I0914 22:28:43.111192 2846606 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-909789" in "kube-system" namespace to be "Ready" ...
	I0914 22:28:43.117199 2846606 pod_ready.go:92] pod "kube-apiserver-addons-909789" in "kube-system" namespace has status "Ready":"True"
	I0914 22:28:43.117224 2846606 pod_ready.go:81] duration metric: took 6.024777ms waiting for pod "kube-apiserver-addons-909789" in "kube-system" namespace to be "Ready" ...
	I0914 22:28:43.117236 2846606 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-909789" in "kube-system" namespace to be "Ready" ...
	I0914 22:28:43.127292 2846606 pod_ready.go:92] pod "kube-controller-manager-addons-909789" in "kube-system" namespace has status "Ready":"True"
	I0914 22:28:43.127316 2846606 pod_ready.go:81] duration metric: took 10.072751ms waiting for pod "kube-controller-manager-addons-909789" in "kube-system" namespace to be "Ready" ...
	I0914 22:28:43.127330 2846606 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nprlc" in "kube-system" namespace to be "Ready" ...
	I0914 22:28:43.169145 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:43.175138 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:43.234814 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:43.463432 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:43.486591 2846606 pod_ready.go:92] pod "kube-proxy-nprlc" in "kube-system" namespace has status "Ready":"True"
	I0914 22:28:43.486616 2846606 pod_ready.go:81] duration metric: took 359.276266ms waiting for pod "kube-proxy-nprlc" in "kube-system" namespace to be "Ready" ...
	I0914 22:28:43.486628 2846606 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-909789" in "kube-system" namespace to be "Ready" ...
	I0914 22:28:43.667785 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:43.671822 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:43.738102 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:43.886891 2846606 pod_ready.go:92] pod "kube-scheduler-addons-909789" in "kube-system" namespace has status "Ready":"True"
	I0914 22:28:43.886926 2846606 pod_ready.go:81] duration metric: took 400.285518ms waiting for pod "kube-scheduler-addons-909789" in "kube-system" namespace to be "Ready" ...
	I0914 22:28:43.886937 2846606 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-dbcdr" in "kube-system" namespace to be "Ready" ...
	I0914 22:28:43.976707 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:44.167961 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:44.171948 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:44.235557 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:44.462493 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:44.668609 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:44.674138 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:44.734855 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:44.961675 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:45.169025 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:45.170208 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:45.234739 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:45.470974 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:45.668153 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:45.672815 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:45.735242 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:45.967596 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:46.169092 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:46.173472 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:46.200582 2846606 pod_ready.go:102] pod "metrics-server-7c66d45ddc-dbcdr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:28:46.235502 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:46.463262 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:46.669183 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:46.673772 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:46.736031 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:46.963448 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:47.171339 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:47.173322 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:47.235826 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:47.463039 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:47.684107 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:47.685727 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:47.736227 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:47.966638 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:48.168974 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:48.177244 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:48.236864 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:48.465125 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:48.674221 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:48.675496 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:48.696365 2846606 pod_ready.go:102] pod "metrics-server-7c66d45ddc-dbcdr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:28:48.739859 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:48.963011 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:49.172610 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:49.182260 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:49.235092 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:49.462290 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:49.668585 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:49.671192 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:49.734930 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:49.962378 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:50.168788 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:50.171594 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:50.234822 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:50.462663 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:50.669272 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:50.673314 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:50.701214 2846606 pod_ready.go:102] pod "metrics-server-7c66d45ddc-dbcdr" in "kube-system" namespace has status "Ready":"False"
	I0914 22:28:50.734600 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:50.963117 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:51.168617 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:51.174564 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:51.235281 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:51.462640 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:51.668292 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:51.672907 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:51.736564 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:51.973960 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:52.169643 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:52.178923 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:52.240838 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:52.480328 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:52.675240 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:52.678771 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:52.707254 2846606 pod_ready.go:92] pod "metrics-server-7c66d45ddc-dbcdr" in "kube-system" namespace has status "Ready":"True"
	I0914 22:28:52.707323 2846606 pod_ready.go:81] duration metric: took 8.820376604s waiting for pod "metrics-server-7c66d45ddc-dbcdr" in "kube-system" namespace to be "Ready" ...
	I0914 22:28:52.707357 2846606 pod_ready.go:38] duration metric: took 10.789238375s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:28:52.707399 2846606 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:28:52.707496 2846606 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:28:52.728624 2846606 api_server.go:72] duration metric: took 47.3037156s to wait for apiserver process to appear ...
	I0914 22:28:52.728698 2846606 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:28:52.728729 2846606 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 22:28:52.737773 2846606 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0914 22:28:52.739592 2846606 api_server.go:141] control plane version: v1.28.1
	I0914 22:28:52.739653 2846606 api_server.go:131] duration metric: took 10.935937ms to wait for apiserver health ...
	I0914 22:28:52.739665 2846606 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:28:52.740604 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:52.751469 2846606 system_pods.go:59] 17 kube-system pods found
	I0914 22:28:52.751538 2846606 system_pods.go:61] "coredns-5dd5756b68-4f5c5" [f921aa97-54d5-4c60-97cb-0ccbd2213691] Running
	I0914 22:28:52.751560 2846606 system_pods.go:61] "csi-hostpath-attacher-0" [eda632b7-1658-4a8d-8dd4-8fa3d0796a2f] Running
	I0914 22:28:52.751581 2846606 system_pods.go:61] "csi-hostpath-resizer-0" [a183520f-8a92-49b4-8b51-de6fdbd339dd] Running
	I0914 22:28:52.751621 2846606 system_pods.go:61] "csi-hostpathplugin-kxpmp" [7c24ce66-709c-49f8-a787-68ada4a9b963] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 22:28:52.751648 2846606 system_pods.go:61] "etcd-addons-909789" [701ca861-8491-4439-9308-17ed8c6b4d37] Running
	I0914 22:28:52.751670 2846606 system_pods.go:61] "kindnet-thkz7" [f3c79155-6c7a-4e07-b74a-25570455c5ea] Running
	I0914 22:28:52.751690 2846606 system_pods.go:61] "kube-apiserver-addons-909789" [135e5899-3304-4fee-aaf0-5b0155658f4d] Running
	I0914 22:28:52.751723 2846606 system_pods.go:61] "kube-controller-manager-addons-909789" [09ca32ec-8424-4568-9205-ba6543ce37c1] Running
	I0914 22:28:52.751745 2846606 system_pods.go:61] "kube-ingress-dns-minikube" [685d9d59-5863-40f4-83ea-b9f5700703cd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0914 22:28:52.751765 2846606 system_pods.go:61] "kube-proxy-nprlc" [7011aeaf-b6b3-45f7-b733-5ae706999079] Running
	I0914 22:28:52.751785 2846606 system_pods.go:61] "kube-scheduler-addons-909789" [035ed87c-db4a-4043-b50f-4642fb0863cc] Running
	I0914 22:28:52.751821 2846606 system_pods.go:61] "metrics-server-7c66d45ddc-dbcdr" [317fd3cf-10c3-4c25-a011-9d2e417c4901] Running
	I0914 22:28:52.751850 2846606 system_pods.go:61] "registry-h7plr" [1b440547-fa9f-4c34-b301-34c86b1393ca] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0914 22:28:52.751874 2846606 system_pods.go:61] "registry-proxy-cwb2j" [ac0e755d-982d-4449-825f-57a34c959a00] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0914 22:28:52.751900 2846606 system_pods.go:61] "snapshot-controller-58dbcc7b99-2r7l2" [1927d7e2-0535-4db1-803b-6e4a9fbc1c97] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 22:28:52.751935 2846606 system_pods.go:61] "snapshot-controller-58dbcc7b99-4hhvm" [ca9367f9-ff0b-4f5a-bac2-01f55eab60b9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 22:28:52.751964 2846606 system_pods.go:61] "storage-provisioner" [b4004ec7-bd62-4e41-abac-4f3e16b9c55d] Running
	I0914 22:28:52.751985 2846606 system_pods.go:74] duration metric: took 12.313751ms to wait for pod list to return data ...
	I0914 22:28:52.752006 2846606 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:28:52.758549 2846606 default_sa.go:45] found service account: "default"
	I0914 22:28:52.758616 2846606 default_sa.go:55] duration metric: took 6.580092ms for default service account to be created ...
	I0914 22:28:52.758641 2846606 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:28:52.769734 2846606 system_pods.go:86] 17 kube-system pods found
	I0914 22:28:52.769803 2846606 system_pods.go:89] "coredns-5dd5756b68-4f5c5" [f921aa97-54d5-4c60-97cb-0ccbd2213691] Running
	I0914 22:28:52.769825 2846606 system_pods.go:89] "csi-hostpath-attacher-0" [eda632b7-1658-4a8d-8dd4-8fa3d0796a2f] Running
	I0914 22:28:52.769848 2846606 system_pods.go:89] "csi-hostpath-resizer-0" [a183520f-8a92-49b4-8b51-de6fdbd339dd] Running
	I0914 22:28:52.769886 2846606 system_pods.go:89] "csi-hostpathplugin-kxpmp" [7c24ce66-709c-49f8-a787-68ada4a9b963] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 22:28:52.769915 2846606 system_pods.go:89] "etcd-addons-909789" [701ca861-8491-4439-9308-17ed8c6b4d37] Running
	I0914 22:28:52.769937 2846606 system_pods.go:89] "kindnet-thkz7" [f3c79155-6c7a-4e07-b74a-25570455c5ea] Running
	I0914 22:28:52.769958 2846606 system_pods.go:89] "kube-apiserver-addons-909789" [135e5899-3304-4fee-aaf0-5b0155658f4d] Running
	I0914 22:28:52.769995 2846606 system_pods.go:89] "kube-controller-manager-addons-909789" [09ca32ec-8424-4568-9205-ba6543ce37c1] Running
	I0914 22:28:52.770022 2846606 system_pods.go:89] "kube-ingress-dns-minikube" [685d9d59-5863-40f4-83ea-b9f5700703cd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0914 22:28:52.770043 2846606 system_pods.go:89] "kube-proxy-nprlc" [7011aeaf-b6b3-45f7-b733-5ae706999079] Running
	I0914 22:28:52.770068 2846606 system_pods.go:89] "kube-scheduler-addons-909789" [035ed87c-db4a-4043-b50f-4642fb0863cc] Running
	I0914 22:28:52.770101 2846606 system_pods.go:89] "metrics-server-7c66d45ddc-dbcdr" [317fd3cf-10c3-4c25-a011-9d2e417c4901] Running
	I0914 22:28:52.770130 2846606 system_pods.go:89] "registry-h7plr" [1b440547-fa9f-4c34-b301-34c86b1393ca] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0914 22:28:52.770155 2846606 system_pods.go:89] "registry-proxy-cwb2j" [ac0e755d-982d-4449-825f-57a34c959a00] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0914 22:28:52.770180 2846606 system_pods.go:89] "snapshot-controller-58dbcc7b99-2r7l2" [1927d7e2-0535-4db1-803b-6e4a9fbc1c97] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 22:28:52.770216 2846606 system_pods.go:89] "snapshot-controller-58dbcc7b99-4hhvm" [ca9367f9-ff0b-4f5a-bac2-01f55eab60b9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 22:28:52.770244 2846606 system_pods.go:89] "storage-provisioner" [b4004ec7-bd62-4e41-abac-4f3e16b9c55d] Running
	I0914 22:28:52.770267 2846606 system_pods.go:126] duration metric: took 11.607043ms to wait for k8s-apps to be running ...
	I0914 22:28:52.770287 2846606 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:28:52.770399 2846606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:28:52.787938 2846606 system_svc.go:56] duration metric: took 17.640426ms WaitForService to wait for kubelet.
	I0914 22:28:52.788009 2846606 kubeadm.go:581] duration metric: took 47.363106818s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:28:52.788044 2846606 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:28:52.791608 2846606 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 22:28:52.791677 2846606 node_conditions.go:123] node cpu capacity is 2
	I0914 22:28:52.791704 2846606 node_conditions.go:105] duration metric: took 3.63876ms to run NodePressure ...
	I0914 22:28:52.791731 2846606 start.go:228] waiting for startup goroutines ...
	I0914 22:28:52.962454 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:53.168113 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:53.172446 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:53.235240 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:53.462868 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:53.668336 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:53.671229 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:53.734998 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:53.964543 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:54.167970 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:54.171260 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:54.234786 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:54.464098 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:54.668761 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:54.671901 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:54.735393 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:54.962320 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:55.168293 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:55.172786 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:55.235900 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:55.467220 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:55.668073 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:55.670769 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:55.735281 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:55.966136 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:56.167519 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:56.170518 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:56.235252 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:56.462062 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:56.667384 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:56.671404 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:56.735768 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:56.963039 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:57.168549 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:57.171215 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:57.235157 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:57.462427 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:57.668578 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:57.678675 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:57.735668 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:57.962364 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:58.170951 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:58.173387 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:58.234880 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:58.462669 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:58.668049 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:58.670677 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:58.735448 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:58.962655 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:59.168177 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:59.171948 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:59.234873 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:59.461827 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:28:59.669619 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:28:59.671260 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:28:59.735400 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:28:59.963502 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:00.168612 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:00.173925 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:29:00.234902 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:00.462973 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:00.670981 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:00.671571 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:29:00.735263 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:00.970939 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:01.172637 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:01.179389 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:29:01.234934 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:01.465917 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:01.675063 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:29:01.675366 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:01.736319 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:01.963212 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:02.169331 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:02.172994 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:29:02.236036 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:02.463161 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:02.668900 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:02.682592 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:29:02.737791 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:02.963246 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:03.169228 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:03.172308 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:29:03.235637 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:03.462651 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:03.668385 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:03.671464 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 22:29:03.735130 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:03.962516 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:04.168272 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:04.171628 2846606 kapi.go:107] duration metric: took 53.024473998s to wait for kubernetes.io/minikube-addons=registry ...
	I0914 22:29:04.237861 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:04.462689 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:04.669990 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:04.741373 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:04.963945 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:05.168057 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:05.236008 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:05.463218 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:05.668440 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:05.735417 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:05.962103 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:06.167955 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:06.235610 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:06.462618 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:06.668636 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:06.735677 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:06.962336 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:07.167752 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:07.235628 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:07.465097 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:07.668815 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:07.736331 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:07.963350 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:08.167549 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:08.235073 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:08.463364 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:08.668045 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:08.735620 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:08.964732 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:09.168266 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:09.235510 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:09.461950 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:09.669385 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:09.737512 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:09.962418 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:10.172245 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:10.234766 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:10.461633 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:10.667914 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:10.735157 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:10.967294 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:11.171775 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:11.235180 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:11.462536 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:11.667932 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:11.735487 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:11.962763 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:12.168192 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:12.234656 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:12.474846 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:12.668773 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:12.735317 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:12.961829 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:13.168221 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:13.235396 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:13.462760 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:13.668392 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:13.734985 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:13.962795 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:14.168365 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:14.234992 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:14.463075 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:14.668231 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:14.738188 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:14.962983 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:15.178166 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:15.240049 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:15.465458 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:15.669297 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:15.735440 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:15.975411 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:16.191757 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:16.241964 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:16.463333 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:16.668712 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:16.735375 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:16.967959 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:17.169372 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:17.235373 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:17.463224 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:17.668679 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:17.735506 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:17.963823 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:18.169021 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:18.237795 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:18.463055 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:18.668289 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:18.734775 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:18.962302 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:19.169047 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:19.240296 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:19.462289 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:19.667962 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:19.735586 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:19.962845 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:20.168549 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:20.242868 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:20.462557 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:20.668665 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:20.735359 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:20.962500 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:21.168285 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:21.242511 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:21.462464 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:21.668290 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:21.735900 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:21.962424 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:22.167915 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:22.235482 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:22.463530 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:22.667823 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:22.735305 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:22.962634 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:23.168668 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:23.236470 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:23.465342 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:23.667904 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:23.735619 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:23.963467 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:24.168129 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:24.235641 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:24.462113 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:24.667491 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:24.735305 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:24.962275 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:25.169016 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:25.234905 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:25.465153 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:25.667620 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:25.739368 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:25.971098 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:26.173378 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:26.235304 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:26.462786 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:26.668651 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:26.736192 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:26.963955 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:27.168321 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:27.235534 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:27.462351 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:27.667997 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:27.737292 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:27.962099 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:28.169008 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:28.235904 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:28.462524 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:28.668350 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:28.735227 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:28.961877 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:29.169226 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:29.236452 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:29.462919 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:29.668862 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:29.738822 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:29.962438 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:30.168039 2846606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:29:30.236231 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:30.462598 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:30.667934 2846606 kapi.go:107] duration metric: took 1m19.528317903s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0914 22:29:30.735513 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:30.963137 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:31.234910 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:31.464555 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:31.735486 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:31.967497 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:32.234954 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:32.462643 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:32.738477 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:32.962333 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:33.238895 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:33.462937 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:33.735686 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:33.962510 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:34.238411 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:34.467645 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:34.734780 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:34.962456 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:35.235725 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:35.462379 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:35.734777 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:35.963736 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:36.235244 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:36.461882 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:36.735544 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:36.962761 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:37.235450 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:37.461785 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:37.735358 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:37.962860 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:38.235580 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:38.462392 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:38.738157 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:38.963676 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:39.235503 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:39.462393 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:39.734775 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:39.962656 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 22:29:40.235151 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:40.462590 2846606 kapi.go:107] duration metric: took 1m29.05151497s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0914 22:29:40.735270 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:41.235204 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:41.734928 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:42.234501 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:42.735103 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:43.234663 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:43.736828 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:44.234557 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:44.735197 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:45.235911 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:45.734757 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:46.235371 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:46.735285 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:47.235031 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:47.734767 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:48.234787 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:48.734874 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:49.235310 2846606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 22:29:49.734970 2846606 kapi.go:107] duration metric: took 1m34.597328724s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0914 22:29:49.736845 2846606 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-909789 cluster.
	I0914 22:29:49.738869 2846606 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0914 22:29:49.740796 2846606 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0914 22:29:49.742799 2846606 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, default-storageclass, inspektor-gadget, ingress-dns, metrics-server, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0914 22:29:49.744767 2846606 addons.go:502] enable addons completed in 1m44.711869942s: enabled=[storage-provisioner cloud-spanner default-storageclass inspektor-gadget ingress-dns metrics-server volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0914 22:29:49.744803 2846606 start.go:233] waiting for cluster config update ...
	I0914 22:29:49.744819 2846606 start.go:242] writing updated cluster config ...
	I0914 22:29:49.745095 2846606 ssh_runner.go:195] Run: rm -f paused
	I0914 22:29:49.809771 2846606 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 22:29:49.812003 2846606 out.go:177] * Done! kubectl is now configured to use "addons-909789" cluster and "default" namespace by default
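	The gcp-auth notes above amount to a short how-to: a pod opts out of credential mounting by carrying a label with the gcp-auth-skip-secret key. A minimal, hypothetical manifest illustrating that label follows; the pod name, image, and the "true" value are assumptions for illustration, not taken from this test run.

	apiVersion: v1
	kind: Pod
	metadata:
	  name: demo                          # hypothetical pod name
	  labels:
	    gcp-auth-skip-secret: "true"      # key named in the gcp-auth output above; value assumed
	spec:
	  containers:
	  - name: demo
	    image: nginx                      # placeholder image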
	
	* 
	* ==> CRI-O <==
	* Sep 14 22:32:48 addons-909789 crio[895]: time="2023-09-14 22:32:48.826454632Z" level=info msg="Stopped pod sandbox: 8137c3abd0157ccacd293fe504e5f0062ef42d13253437dbde76936a86360f18" id=a86824ab-79c4-40f8-969e-1af95b49afd0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 14 22:32:48 addons-909789 crio[895]: time="2023-09-14 22:32:48.861447519Z" level=info msg="Removing container: 06cf1c9a34e8ca6248df3c91a24b48a7e63968b4f9ab1798efab7f3029b0fd93" id=c14f5920-ee7c-4b86-938d-cc6b7bba1ded name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 14 22:32:48 addons-909789 crio[895]: time="2023-09-14 22:32:48.878388688Z" level=info msg="Removed container 06cf1c9a34e8ca6248df3c91a24b48a7e63968b4f9ab1798efab7f3029b0fd93: ingress-nginx/ingress-nginx-controller-798b8b85d7-xt7zt/controller" id=c14f5920-ee7c-4b86-938d-cc6b7bba1ded name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 14 22:32:53 addons-909789 crio[895]: time="2023-09-14 22:32:53.198655173Z" level=info msg="Checking image status: registry.k8s.io/pause:3.9" id=af46b159-90e9-4cbb-873a-a5a5e8b69ae8 name=/runtime.v1.ImageService/ImageStatus
	Sep 14 22:32:53 addons-909789 crio[895]: time="2023-09-14 22:32:53.198881052Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6 registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097],Size_:520014,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},Info:map[string]string{},}" id=af46b159-90e9-4cbb-873a-a5a5e8b69ae8 name=/runtime.v1.ImageService/ImageStatus
	Sep 14 22:32:53 addons-909789 crio[895]: time="2023-09-14 22:32:53.463698407Z" level=info msg="Removing container: d14738137a88a8085d6eff0eb4ef81e88ded34b4f325411902d12496dd3a214e" id=709f1508-caf7-4da8-b23e-7f67ccb6593e name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 14 22:32:53 addons-909789 crio[895]: time="2023-09-14 22:32:53.511102860Z" level=info msg="Removed container d14738137a88a8085d6eff0eb4ef81e88ded34b4f325411902d12496dd3a214e: ingress-nginx/ingress-nginx-admission-patch-zghmp/patch" id=709f1508-caf7-4da8-b23e-7f67ccb6593e name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 14 22:32:53 addons-909789 crio[895]: time="2023-09-14 22:32:53.512811519Z" level=info msg="Removing container: ffaa09d3e9c3b21e917689c9d38b451418c94f14d1579da12b7bd066f5a41053" id=3478e60f-070e-4850-9a65-76e92988a0fd name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 14 22:32:53 addons-909789 crio[895]: time="2023-09-14 22:32:53.536236793Z" level=info msg="Removed container ffaa09d3e9c3b21e917689c9d38b451418c94f14d1579da12b7bd066f5a41053: ingress-nginx/ingress-nginx-admission-create-9srvk/create" id=3478e60f-070e-4850-9a65-76e92988a0fd name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 14 22:32:53 addons-909789 crio[895]: time="2023-09-14 22:32:53.537609395Z" level=info msg="Stopping pod sandbox: bb1de60b0035d5f999e415fffa62ef421bd3de5c1b3ce4ab4d6b3bb6049b3e0f" id=43f43b10-73dc-4614-bd70-e11c48470aa8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 14 22:32:53 addons-909789 crio[895]: time="2023-09-14 22:32:53.537643709Z" level=info msg="Stopped pod sandbox (already stopped): bb1de60b0035d5f999e415fffa62ef421bd3de5c1b3ce4ab4d6b3bb6049b3e0f" id=43f43b10-73dc-4614-bd70-e11c48470aa8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 14 22:32:53 addons-909789 crio[895]: time="2023-09-14 22:32:53.538011364Z" level=info msg="Removing pod sandbox: bb1de60b0035d5f999e415fffa62ef421bd3de5c1b3ce4ab4d6b3bb6049b3e0f" id=1fdf5b94-94b9-4924-8fda-ee24ae31bc08 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 14 22:32:53 addons-909789 crio[895]: time="2023-09-14 22:32:53.546163553Z" level=info msg="Removed pod sandbox: bb1de60b0035d5f999e415fffa62ef421bd3de5c1b3ce4ab4d6b3bb6049b3e0f" id=1fdf5b94-94b9-4924-8fda-ee24ae31bc08 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 14 22:32:53 addons-909789 crio[895]: time="2023-09-14 22:32:53.546708078Z" level=info msg="Stopping pod sandbox: 8137c3abd0157ccacd293fe504e5f0062ef42d13253437dbde76936a86360f18" id=30a3d040-e643-46e7-9b33-36c907549ae3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 14 22:32:53 addons-909789 crio[895]: time="2023-09-14 22:32:53.546739069Z" level=info msg="Stopped pod sandbox (already stopped): 8137c3abd0157ccacd293fe504e5f0062ef42d13253437dbde76936a86360f18" id=30a3d040-e643-46e7-9b33-36c907549ae3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 14 22:32:53 addons-909789 crio[895]: time="2023-09-14 22:32:53.547113551Z" level=info msg="Removing pod sandbox: 8137c3abd0157ccacd293fe504e5f0062ef42d13253437dbde76936a86360f18" id=bef2ed45-6ae7-4a6e-bba7-47807f5e25e7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 14 22:32:53 addons-909789 crio[895]: time="2023-09-14 22:32:53.558371755Z" level=info msg="Removed pod sandbox: 8137c3abd0157ccacd293fe504e5f0062ef42d13253437dbde76936a86360f18" id=bef2ed45-6ae7-4a6e-bba7-47807f5e25e7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 14 22:32:53 addons-909789 crio[895]: time="2023-09-14 22:32:53.559124592Z" level=info msg="Stopping pod sandbox: 33875592ca25298583918bc1cfc0b307250f261351e7f2e41360df456d7de7dd" id=fef24531-4e23-4d2c-8e75-790d4332266e name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 14 22:32:53 addons-909789 crio[895]: time="2023-09-14 22:32:53.559239767Z" level=info msg="Stopped pod sandbox (already stopped): 33875592ca25298583918bc1cfc0b307250f261351e7f2e41360df456d7de7dd" id=fef24531-4e23-4d2c-8e75-790d4332266e name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 14 22:32:53 addons-909789 crio[895]: time="2023-09-14 22:32:53.559586007Z" level=info msg="Removing pod sandbox: 33875592ca25298583918bc1cfc0b307250f261351e7f2e41360df456d7de7dd" id=cfb9c284-0a27-48c7-b430-78a213ec80a7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 14 22:32:53 addons-909789 crio[895]: time="2023-09-14 22:32:53.573453259Z" level=info msg="Removed pod sandbox: 33875592ca25298583918bc1cfc0b307250f261351e7f2e41360df456d7de7dd" id=cfb9c284-0a27-48c7-b430-78a213ec80a7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 14 22:32:53 addons-909789 crio[895]: time="2023-09-14 22:32:53.574097936Z" level=info msg="Stopping pod sandbox: 1d2a7231ba7c3b68ce551ffe672291eedbdd3d72c576f3bfe87075025a5eecf6" id=d38ca06f-50bd-41fa-9067-6aad777ebd91 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 14 22:32:53 addons-909789 crio[895]: time="2023-09-14 22:32:53.574212250Z" level=info msg="Stopped pod sandbox (already stopped): 1d2a7231ba7c3b68ce551ffe672291eedbdd3d72c576f3bfe87075025a5eecf6" id=d38ca06f-50bd-41fa-9067-6aad777ebd91 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 14 22:32:53 addons-909789 crio[895]: time="2023-09-14 22:32:53.574611650Z" level=info msg="Removing pod sandbox: 1d2a7231ba7c3b68ce551ffe672291eedbdd3d72c576f3bfe87075025a5eecf6" id=f1f1557f-5e16-4ca9-a377-f96184ca260a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 14 22:32:53 addons-909789 crio[895]: time="2023-09-14 22:32:53.583842600Z" level=info msg="Removed pod sandbox: 1d2a7231ba7c3b68ce551ffe672291eedbdd3d72c576f3bfe87075025a5eecf6" id=f1f1557f-5e16-4ca9-a377-f96184ca260a name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                          CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3d9068311b282       a39a074194753e46f21cfbf0b4253444939f276ed100d23d5fc568ada19a9ebb                                               9 seconds ago       Exited              hello-world-app           2                   d533667e6e5f5       hello-world-app-5d77478584-s6274
	b7f60c20f600c       docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                2 minutes ago       Running             nginx                     0                   027b2dd0e3570       nginx
	c847379577523       ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98          2 minutes ago       Running             headlamp                  0                   46eb6471ad162       headlamp-699c48fb74-c52mw
	290d227356c68       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa   3 minutes ago       Running             gcp-auth                  0                   fb507eeb1a347       gcp-auth-d4c87556c-8ltfb
	e2426d162ab74       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                               4 minutes ago       Running             storage-provisioner       0                   4328276b0a0ec       storage-provisioner
	5dcc17a13097f       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                               4 minutes ago       Running             coredns                   0                   98af353f46a47       coredns-5dd5756b68-4f5c5
	64cc295cc0e1f       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052             4 minutes ago       Running             kindnet-cni               0                   f9cc43119b110       kindnet-thkz7
	65580a145a0b3       812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26                                               4 minutes ago       Running             kube-proxy                0                   353445da5e739       kube-proxy-nprlc
	a616ca0ed45dd       b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87                                               5 minutes ago       Running             kube-scheduler            0                   0f42234dc50b5       kube-scheduler-addons-909789
	37c6e92b59855       8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965                                               5 minutes ago       Running             kube-controller-manager   0                   ed4026b2a86c8       kube-controller-manager-addons-909789
	a5f8775b04e82       b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a                                               5 minutes ago       Running             kube-apiserver            0                   787bf4c8bda83       kube-apiserver-addons-909789
	641fae2c3f19e       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                               5 minutes ago       Running             etcd                      0                   03e990d20e377       etcd-addons-909789
	
	* 
	* ==> coredns [5dcc17a13097fe9df8a7d11bfeb91b0890586f05340828395a792d2ba24e9326] <==
	* [INFO] 10.244.0.16:58977 - 12347 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000039729s
	[INFO] 10.244.0.16:53723 - 1600 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001998701s
	[INFO] 10.244.0.16:58977 - 64118 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001082797s
	[INFO] 10.244.0.16:58977 - 28055 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001156397s
	[INFO] 10.244.0.16:53723 - 52197 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001897368s
	[INFO] 10.244.0.16:58977 - 8450 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000174449s
	[INFO] 10.244.0.16:53723 - 9468 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000065649s
	[INFO] 10.244.0.16:56495 - 38190 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000038539s
	[INFO] 10.244.0.16:34220 - 50251 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000255138s
	[INFO] 10.244.0.16:34220 - 1151 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000073608s
	[INFO] 10.244.0.16:34220 - 29628 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055786s
	[INFO] 10.244.0.16:56495 - 10934 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000063746s
	[INFO] 10.244.0.16:34220 - 38269 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000108898s
	[INFO] 10.244.0.16:34220 - 11873 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000058691s
	[INFO] 10.244.0.16:34220 - 11591 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000053046s
	[INFO] 10.244.0.16:56495 - 14792 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000074462s
	[INFO] 10.244.0.16:56495 - 38486 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000219331s
	[INFO] 10.244.0.16:34220 - 35157 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00130468s
	[INFO] 10.244.0.16:56495 - 6037 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000091315s
	[INFO] 10.244.0.16:56495 - 4841 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000083585s
	[INFO] 10.244.0.16:34220 - 39966 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000986362s
	[INFO] 10.244.0.16:34220 - 46329 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000070671s
	[INFO] 10.244.0.16:56495 - 36554 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000908693s
	[INFO] 10.244.0.16:56495 - 59595 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000928927s
	[INFO] 10.244.0.16:56495 - 32753 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.0000469s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-909789
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-909789
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=addons-909789
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T22_27_54_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-909789
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 22:27:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-909789
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 22:32:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 22:30:27 +0000   Thu, 14 Sep 2023 22:27:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 22:30:27 +0000   Thu, 14 Sep 2023 22:27:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 22:30:27 +0000   Thu, 14 Sep 2023 22:27:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 22:30:27 +0000   Thu, 14 Sep 2023 22:28:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-909789
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 af6d5884464b4417ba5e4fbc23c0fef0
	  System UUID:                c048a2d9-675f-4a48-8100-8b34d6e9c209
	  Boot ID:                    370886c1-a939-4b15-8117-498126d3502e
	  Kernel Version:             5.15.0-1044-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-s6274         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  gcp-auth                    gcp-auth-d4c87556c-8ltfb                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  headlamp                    headlamp-699c48fb74-c52mw                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 coredns-5dd5756b68-4f5c5                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m47s
	  kube-system                 etcd-addons-909789                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m1s
	  kube-system                 kindnet-thkz7                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m47s
	  kube-system                 kube-apiserver-addons-909789             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-controller-manager-addons-909789    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-proxy-nprlc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-scheduler-addons-909789             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m43s  kube-proxy       
	  Normal  Starting                 5m1s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m1s   kubelet          Node addons-909789 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m1s   kubelet          Node addons-909789 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m1s   kubelet          Node addons-909789 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m50s  node-controller  Node addons-909789 event: Registered Node addons-909789 in Controller
	  Normal  NodeReady                4m13s  kubelet          Node addons-909789 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001029] FS-Cache: O-key=[8] 'cd6e3b0000000000'
	[  +0.000669] FS-Cache: N-cookie c=000000d2 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000931] FS-Cache: N-cookie d=00000000358e642b{9p.inode} n=00000000e4905bc3
	[  +0.000997] FS-Cache: N-key=[8] 'cd6e3b0000000000'
	[  +0.006953] FS-Cache: Duplicate cookie detected
	[  +0.000707] FS-Cache: O-cookie c=000000cc [p=000000c9 fl=226 nc=0 na=1]
	[  +0.000939] FS-Cache: O-cookie d=00000000358e642b{9p.inode} n=0000000040a297ab
	[  +0.001005] FS-Cache: O-key=[8] 'cd6e3b0000000000'
	[  +0.000675] FS-Cache: N-cookie c=000000d3 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000972] FS-Cache: N-cookie d=00000000358e642b{9p.inode} n=000000005c7627e1
	[  +0.001079] FS-Cache: N-key=[8] 'cd6e3b0000000000'
	[  +2.799703] FS-Cache: Duplicate cookie detected
	[  +0.000701] FS-Cache: O-cookie c=000000ca [p=000000c9 fl=226 nc=0 na=1]
	[  +0.000990] FS-Cache: O-cookie d=00000000358e642b{9p.inode} n=000000009b736179
	[  +0.001065] FS-Cache: O-key=[8] 'cc6e3b0000000000'
	[  +0.000703] FS-Cache: N-cookie c=000000d5 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000902] FS-Cache: N-cookie d=00000000358e642b{9p.inode} n=00000000db0d2821
	[  +0.001009] FS-Cache: N-key=[8] 'cc6e3b0000000000'
	[  +0.303117] FS-Cache: Duplicate cookie detected
	[  +0.000766] FS-Cache: O-cookie c=000000cf [p=000000c9 fl=226 nc=0 na=1]
	[  +0.000970] FS-Cache: O-cookie d=00000000358e642b{9p.inode} n=00000000a64614a0
	[  +0.001160] FS-Cache: O-key=[8] 'd26e3b0000000000'
	[  +0.000738] FS-Cache: N-cookie c=000000d6 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.001000] FS-Cache: N-cookie d=00000000358e642b{9p.inode} n=0000000035f5a14f
	[  +0.001139] FS-Cache: N-key=[8] 'd26e3b0000000000'
	
	* 
	* ==> etcd [641fae2c3f19e78d4268e56cbd95e40d8071a64a66588bb43c0c104a95507899] <==
	* {"level":"info","ts":"2023-09-14T22:27:46.353225Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:27:46.353427Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:27:46.354339Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-09-14T22:27:46.360418Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:27:46.360577Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:27:46.360631Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:27:46.364577Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T22:27:46.364646Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T22:28:05.599853Z","caller":"traceutil/trace.go:171","msg":"trace[1934749903] linearizableReadLoop","detail":"{readStateIndex:313; appliedIndex:312; }","duration":"149.591225ms","start":"2023-09-14T22:28:05.450238Z","end":"2023-09-14T22:28:05.599829Z","steps":["trace[1934749903] 'read index received'  (duration: 51.279796ms)","trace[1934749903] 'applied index is now lower than readState.Index'  (duration: 98.310879ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-14T22:28:05.600164Z","caller":"traceutil/trace.go:171","msg":"trace[1907373571] transaction","detail":"{read_only:false; response_revision:305; number_of_response:1; }","duration":"150.419122ms","start":"2023-09-14T22:28:05.449735Z","end":"2023-09-14T22:28:05.600154Z","steps":["trace[1907373571] 'process raft request'  (duration: 51.936148ms)","trace[1907373571] 'compare'  (duration: 98.07024ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-14T22:28:05.600414Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.186524ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/\" range_end:\"/registry/serviceaccounts/default0\" ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2023-09-14T22:28:05.600675Z","caller":"traceutil/trace.go:171","msg":"trace[929484842] range","detail":"{range_begin:/registry/serviceaccounts/default/; range_end:/registry/serviceaccounts/default0; response_count:1; response_revision:305; }","duration":"150.457252ms","start":"2023-09-14T22:28:05.450205Z","end":"2023-09-14T22:28:05.600663Z","steps":["trace[929484842] 'agreement among raft nodes before linearized reading'  (duration: 150.145744ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-14T22:28:05.601806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.360866ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/addons-909789\" ","response":"range_response_count:1 size:679"}
	{"level":"info","ts":"2023-09-14T22:28:05.603223Z","caller":"traceutil/trace.go:171","msg":"trace[1949139460] range","detail":"{range_begin:/registry/csinodes/addons-909789; range_end:; response_count:1; response_revision:306; }","duration":"152.775971ms","start":"2023-09-14T22:28:05.450436Z","end":"2023-09-14T22:28:05.603212Z","steps":["trace[1949139460] 'agreement among raft nodes before linearized reading'  (duration: 151.344407ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-14T22:28:05.60347Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.22247ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2023-09-14T22:28:05.604001Z","caller":"traceutil/trace.go:171","msg":"trace[1343142289] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/root-ca-cert-publisher; range_end:; response_count:1; response_revision:306; }","duration":"153.723187ms","start":"2023-09-14T22:28:05.450264Z","end":"2023-09-14T22:28:05.603988Z","steps":["trace[1343142289] 'agreement among raft nodes before linearized reading'  (duration: 151.202244ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-14T22:28:05.603859Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.432069ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/addons-909789\" ","response":"range_response_count:1 size:556"}
	{"level":"info","ts":"2023-09-14T22:28:05.606901Z","caller":"traceutil/trace.go:171","msg":"trace[923357873] range","detail":"{range_begin:/registry/leases/kube-node-lease/addons-909789; range_end:; response_count:1; response_revision:306; }","duration":"156.472477ms","start":"2023-09-14T22:28:05.450417Z","end":"2023-09-14T22:28:05.60689Z","steps":["trace[923357873] 'agreement among raft nodes before linearized reading'  (duration: 153.402546ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T22:28:08.41637Z","caller":"traceutil/trace.go:171","msg":"trace[752314885] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"121.504559ms","start":"2023-09-14T22:28:08.294849Z","end":"2023-09-14T22:28:08.416354Z","steps":["trace[752314885] 'process raft request'  (duration: 22.010779ms)","trace[752314885] 'compare'  (duration: 11.204089ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-14T22:28:08.417168Z","caller":"traceutil/trace.go:171","msg":"trace[1225409697] transaction","detail":"{read_only:false; response_revision:342; number_of_response:1; }","duration":"122.218726ms","start":"2023-09-14T22:28:08.294937Z","end":"2023-09-14T22:28:08.417155Z","steps":["trace[1225409697] 'process raft request'  (duration: 121.36024ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T22:28:08.417293Z","caller":"traceutil/trace.go:171","msg":"trace[1385090267] linearizableReadLoop","detail":"{readStateIndex:351; appliedIndex:348; }","duration":"122.188317ms","start":"2023-09-14T22:28:08.295095Z","end":"2023-09-14T22:28:08.417284Z","steps":["trace[1385090267] 'read index received'  (duration: 17.451857ms)","trace[1385090267] 'applied index is now lower than readState.Index'  (duration: 104.735689ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-14T22:28:08.438919Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.82363ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2023-09-14T22:28:08.43897Z","caller":"traceutil/trace.go:171","msg":"trace[2043031157] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:343; }","duration":"143.88224ms","start":"2023-09-14T22:28:08.295077Z","end":"2023-09-14T22:28:08.438959Z","steps":["trace[2043031157] 'agreement among raft nodes before linearized reading'  (duration: 122.290947ms)","trace[2043031157] 'range keys from in-memory index tree'  (duration: 20.361739ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-14T22:28:09.999572Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.32646ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/ingress-nginx/ingress-nginx-admission\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-14T22:28:10.000227Z","caller":"traceutil/trace.go:171","msg":"trace[1345467174] range","detail":"{range_begin:/registry/serviceaccounts/ingress-nginx/ingress-nginx-admission; range_end:; response_count:0; response_revision:465; }","duration":"102.992453ms","start":"2023-09-14T22:28:09.89722Z","end":"2023-09-14T22:28:10.000213Z","steps":["trace[1345467174] 'agreement among raft nodes before linearized reading'  (duration: 96.059033ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [290d227356c6889dc479181d5589a2348eccadc115396912ed77b5992cff486c] <==
	* 2023/09/14 22:29:49 GCP Auth Webhook started!
	2023/09/14 22:29:56 Ready to marshal response ...
	2023/09/14 22:29:56 Ready to write response ...
	2023/09/14 22:29:57 Ready to marshal response ...
	2023/09/14 22:29:57 Ready to write response ...
	2023/09/14 22:29:57 Ready to marshal response ...
	2023/09/14 22:29:57 Ready to write response ...
	2023/09/14 22:30:00 Ready to marshal response ...
	2023/09/14 22:30:00 Ready to write response ...
	2023/09/14 22:30:07 Ready to marshal response ...
	2023/09/14 22:30:07 Ready to write response ...
	2023/09/14 22:30:16 Ready to marshal response ...
	2023/09/14 22:30:16 Ready to write response ...
	2023/09/14 22:30:37 Ready to marshal response ...
	2023/09/14 22:30:37 Ready to write response ...
	2023/09/14 22:32:28 Ready to marshal response ...
	2023/09/14 22:32:28 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  22:32:54 up 22:15,  0 users,  load average: 0.41, 1.33, 1.92
	Linux addons-909789 5.15.0-1044-aws #49~20.04.1-Ubuntu SMP Mon Aug 21 17:10:24 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [64cc295cc0e1ffb064a72c704409bedfc718409431a1105fc2947236859d5fc6] <==
	* I0914 22:30:51.553358       1 main.go:227] handling current node
	I0914 22:31:01.566312       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:31:01.566341       1 main.go:227] handling current node
	I0914 22:31:11.570884       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:31:11.570911       1 main.go:227] handling current node
	I0914 22:31:21.583878       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:31:21.583910       1 main.go:227] handling current node
	I0914 22:31:31.588368       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:31:31.588394       1 main.go:227] handling current node
	I0914 22:31:41.600176       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:31:41.600205       1 main.go:227] handling current node
	I0914 22:31:51.621438       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:31:51.621478       1 main.go:227] handling current node
	I0914 22:32:01.633579       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:32:01.633608       1 main.go:227] handling current node
	I0914 22:32:11.638136       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:32:11.638164       1 main.go:227] handling current node
	I0914 22:32:21.649306       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:32:21.649681       1 main.go:227] handling current node
	I0914 22:32:31.653296       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:32:31.653324       1 main.go:227] handling current node
	I0914 22:32:41.666030       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:32:41.666060       1 main.go:227] handling current node
	I0914 22:32:51.669857       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:32:51.669882       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [a5f8775b04e82f737c568366f844eda4fbbd90ae49c7d722f23f9a757e83a959] <==
	* E0914 22:30:54.482754       1 controller.go:159] removing "v1beta1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0914 22:30:54.482837       1 controller.go:159] removing "v1beta1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0914 22:30:54.484882       1 controller.go:159] removing "v1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0914 22:30:54.485730       1 controller.go:159] removing "v1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	W0914 22:30:55.412763       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0914 22:30:55.467177       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0914 22:30:55.476575       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0914 22:31:00.268329       1 controller.go:159] removing "v1beta1.metrics.k8s.io" from AggregationController failed with: resource not found
	I0914 22:31:06.089077       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0914 22:31:06.108950       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	E0914 22:31:06.135489       1 controller.go:159] removing "v1alpha1.gadget.kinvolk.io" from AggregationController failed with: resource not found
	E0914 22:31:06.135519       1 controller.go:159] removing "v1alpha1.gadget.kinvolk.io" from AggregationController failed with: resource not found
	W0914 22:31:07.124244       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0914 22:31:53.291099       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0914 22:31:53.291123       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 22:31:53.291167       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 22:31:53.291176       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 22:32:28.595279       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.189.223"}
	E0914 22:32:44.866397       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x400763cff0), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x4008d0fea0), ResponseWriter:(*httpsnoop.rw)(0x4008d0fea0), Flusher:(*httpsnoop.rw)(0x4008d0fea0), CloseNotifier:(*httpsnoop.rw)(0x4008d0fea0), Pusher:(*httpsnoop.rw)(0x4008d0fea0)}}, encoder:(*versioning.codec)(0x4007605400), memAllocator:(*runtime.Allocator)(0x400530ac90)})
	E0914 22:32:53.291582       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0914 22:32:53.291611       1 handler_proxy.go:93] no RequestInfo found in the context
	E0914 22:32:53.291652       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 22:32:53.291662       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [37c6e92b59855e0182aa4f072b5138b3790bad76d51da6e347094014f3c36d65] <==
	* W0914 22:31:41.024077       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 22:31:41.024113       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0914 22:31:53.231475       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 22:31:53.231509       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0914 22:32:14.280165       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 22:32:14.280197       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0914 22:32:17.797528       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 22:32:17.797642       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0914 22:32:24.898781       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 22:32:24.898816       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0914 22:32:25.055978       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 22:32:25.056012       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0914 22:32:28.308429       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0914 22:32:28.336224       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-s6274"
	I0914 22:32:28.351712       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="43.936951ms"
	I0914 22:32:28.368417       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="16.282528ms"
	I0914 22:32:28.373994       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="50.913µs"
	I0914 22:32:28.374308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.552µs"
	I0914 22:32:30.829434       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="38.309µs"
	I0914 22:32:31.846928       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="95.753µs"
	I0914 22:32:32.829531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="97.148µs"
	I0914 22:32:44.885667       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="83.06µs"
	I0914 22:32:45.580815       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0914 22:32:45.585970       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-798b8b85d7" duration="16.837µs"
	I0914 22:32:45.592666       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	
	* 
	* ==> kube-proxy [65580a145a0b38805d5807f53147fe8d31254c1924b780486d6c09a565cfb537] <==
	* I0914 22:28:10.860881       1 server_others.go:69] "Using iptables proxy"
	I0914 22:28:10.927465       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0914 22:28:11.119701       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0914 22:28:11.137102       1 server_others.go:152] "Using iptables Proxier"
	I0914 22:28:11.137202       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0914 22:28:11.137234       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0914 22:28:11.137333       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 22:28:11.137571       1 server.go:846] "Version info" version="v1.28.1"
	I0914 22:28:11.137633       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:28:11.140634       1 config.go:188] "Starting service config controller"
	I0914 22:28:11.140715       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 22:28:11.140762       1 config.go:97] "Starting endpoint slice config controller"
	I0914 22:28:11.140852       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 22:28:11.141383       1 config.go:315] "Starting node config controller"
	I0914 22:28:11.141390       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 22:28:11.241215       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0914 22:28:11.241300       1 shared_informer.go:318] Caches are synced for service config
	I0914 22:28:11.241553       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [a616ca0ed45ddae79dbc9b91f6e3d3dcae69a71e94a49d4a764567b94b5942a1] <==
	* W0914 22:27:50.037698       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 22:27:50.037709       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0914 22:27:50.037732       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 22:27:50.037741       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0914 22:27:50.037747       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 22:27:50.037756       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0914 22:27:50.037780       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 22:27:50.037791       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0914 22:27:50.037795       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 22:27:50.037804       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0914 22:27:50.037829       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 22:27:50.037839       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0914 22:27:50.037841       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 22:27:50.037849       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0914 22:27:50.050378       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 22:27:50.050469       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 22:27:50.890521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 22:27:50.890635       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0914 22:27:50.916370       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 22:27:50.916407       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0914 22:27:50.969044       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 22:27:50.969164       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0914 22:27:51.344806       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 22:27:51.344839       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0914 22:27:53.605503       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Sep 14 22:32:47 addons-909789 kubelet[1360]: I0914 22:32:47.145559    1360 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="831416de-549b-4ecd-a7c9-6fca3082fc7b" path="/var/lib/kubelet/pods/831416de-549b-4ecd-a7c9-6fca3082fc7b/volumes"
	Sep 14 22:32:47 addons-909789 kubelet[1360]: I0914 22:32:47.145971    1360 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d97eee1c-bd25-4856-9d0d-404cb005cc3d" path="/var/lib/kubelet/pods/d97eee1c-bd25-4856-9d0d-404cb005cc3d/volumes"
	Sep 14 22:32:48 addons-909789 kubelet[1360]: I0914 22:32:48.860122    1360 scope.go:117] "RemoveContainer" containerID="06cf1c9a34e8ca6248df3c91a24b48a7e63968b4f9ab1798efab7f3029b0fd93"
	Sep 14 22:32:48 addons-909789 kubelet[1360]: I0914 22:32:48.878625    1360 scope.go:117] "RemoveContainer" containerID="06cf1c9a34e8ca6248df3c91a24b48a7e63968b4f9ab1798efab7f3029b0fd93"
	Sep 14 22:32:48 addons-909789 kubelet[1360]: E0914 22:32:48.878997    1360 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06cf1c9a34e8ca6248df3c91a24b48a7e63968b4f9ab1798efab7f3029b0fd93\": container with ID starting with 06cf1c9a34e8ca6248df3c91a24b48a7e63968b4f9ab1798efab7f3029b0fd93 not found: ID does not exist" containerID="06cf1c9a34e8ca6248df3c91a24b48a7e63968b4f9ab1798efab7f3029b0fd93"
	Sep 14 22:32:48 addons-909789 kubelet[1360]: I0914 22:32:48.879043    1360 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06cf1c9a34e8ca6248df3c91a24b48a7e63968b4f9ab1798efab7f3029b0fd93"} err="failed to get container status \"06cf1c9a34e8ca6248df3c91a24b48a7e63968b4f9ab1798efab7f3029b0fd93\": rpc error: code = NotFound desc = could not find container \"06cf1c9a34e8ca6248df3c91a24b48a7e63968b4f9ab1798efab7f3029b0fd93\": container with ID starting with 06cf1c9a34e8ca6248df3c91a24b48a7e63968b4f9ab1798efab7f3029b0fd93 not found: ID does not exist"
	Sep 14 22:32:48 addons-909789 kubelet[1360]: I0914 22:32:48.961119    1360 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0dd56fb2-6c24-4146-a922-e2665f2deae6-webhook-cert\") pod \"0dd56fb2-6c24-4146-a922-e2665f2deae6\" (UID: \"0dd56fb2-6c24-4146-a922-e2665f2deae6\") "
	Sep 14 22:32:48 addons-909789 kubelet[1360]: I0914 22:32:48.961189    1360 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp64n\" (UniqueName: \"kubernetes.io/projected/0dd56fb2-6c24-4146-a922-e2665f2deae6-kube-api-access-tp64n\") pod \"0dd56fb2-6c24-4146-a922-e2665f2deae6\" (UID: \"0dd56fb2-6c24-4146-a922-e2665f2deae6\") "
	Sep 14 22:32:48 addons-909789 kubelet[1360]: I0914 22:32:48.964677    1360 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd56fb2-6c24-4146-a922-e2665f2deae6-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "0dd56fb2-6c24-4146-a922-e2665f2deae6" (UID: "0dd56fb2-6c24-4146-a922-e2665f2deae6"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 14 22:32:48 addons-909789 kubelet[1360]: I0914 22:32:48.965304    1360 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd56fb2-6c24-4146-a922-e2665f2deae6-kube-api-access-tp64n" (OuterVolumeSpecName: "kube-api-access-tp64n") pod "0dd56fb2-6c24-4146-a922-e2665f2deae6" (UID: "0dd56fb2-6c24-4146-a922-e2665f2deae6"). InnerVolumeSpecName "kube-api-access-tp64n". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 22:32:49 addons-909789 kubelet[1360]: I0914 22:32:49.062351    1360 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tp64n\" (UniqueName: \"kubernetes.io/projected/0dd56fb2-6c24-4146-a922-e2665f2deae6-kube-api-access-tp64n\") on node \"addons-909789\" DevicePath \"\""
	Sep 14 22:32:49 addons-909789 kubelet[1360]: I0914 22:32:49.062392    1360 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0dd56fb2-6c24-4146-a922-e2665f2deae6-webhook-cert\") on node \"addons-909789\" DevicePath \"\""
	Sep 14 22:32:49 addons-909789 kubelet[1360]: I0914 22:32:49.145382    1360 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0dd56fb2-6c24-4146-a922-e2665f2deae6" path="/var/lib/kubelet/pods/0dd56fb2-6c24-4146-a922-e2665f2deae6/volumes"
	Sep 14 22:32:53 addons-909789 kubelet[1360]: W0914 22:32:53.306671    1360 machine.go:65] Cannot read vendor id correctly, set empty.
	Sep 14 22:32:53 addons-909789 kubelet[1360]: E0914 22:32:53.311994    1360 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/775b74b83176c7f865a718b1ec95a0339437bfbe44c4b733b0959221611510a1, memory: /docker/775b74b83176c7f865a718b1ec95a0339437bfbe44c4b733b0959221611510a1/system.slice/kubelet.service"
	Sep 14 22:32:53 addons-909789 kubelet[1360]: E0914 22:32:53.323325    1360 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/394b7ed703577e0d27c38a12bd3d4d36594dead5a74298e9a304b471c3bd595b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/394b7ed703577e0d27c38a12bd3d4d36594dead5a74298e9a304b471c3bd595b/diff: no such file or directory, extraDiskErr: <nil>
	Sep 14 22:32:53 addons-909789 kubelet[1360]: E0914 22:32:53.325013    1360 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/61d4cb88805e58029f0422bcac6626512006b0d05ead51f6dae81a1454744810/diff" to get inode usage: stat /var/lib/containers/storage/overlay/61d4cb88805e58029f0422bcac6626512006b0d05ead51f6dae81a1454744810/diff: no such file or directory, extraDiskErr: <nil>
	Sep 14 22:32:53 addons-909789 kubelet[1360]: E0914 22:32:53.328612    1360 manager.go:1106] Failed to create existing container: /docker/775b74b83176c7f865a718b1ec95a0339437bfbe44c4b733b0959221611510a1/crio-4ff4fabaea1bfda5dd3e0b632fdb8219fa711408cfb2ad547c815c489b277c6c: Error finding container 4ff4fabaea1bfda5dd3e0b632fdb8219fa711408cfb2ad547c815c489b277c6c: Status 404 returned error can't find the container with id 4ff4fabaea1bfda5dd3e0b632fdb8219fa711408cfb2ad547c815c489b277c6c
	Sep 14 22:32:53 addons-909789 kubelet[1360]: E0914 22:32:53.332001    1360 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b7102ad667cb875509de0ef47adca4e84865ab5a104a0cfb11ccc638e1aac68b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b7102ad667cb875509de0ef47adca4e84865ab5a104a0cfb11ccc638e1aac68b/diff: no such file or directory, extraDiskErr: <nil>
	Sep 14 22:32:53 addons-909789 kubelet[1360]: E0914 22:32:53.342899    1360 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/afadeb35ccf78850b805f80bb1765644ebe0222aaead874c6746a731178f4218/diff" to get inode usage: stat /var/lib/containers/storage/overlay/afadeb35ccf78850b805f80bb1765644ebe0222aaead874c6746a731178f4218/diff: no such file or directory, extraDiskErr: <nil>
	Sep 14 22:32:53 addons-909789 kubelet[1360]: E0914 22:32:53.359284    1360 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/394b7ed703577e0d27c38a12bd3d4d36594dead5a74298e9a304b471c3bd595b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/394b7ed703577e0d27c38a12bd3d4d36594dead5a74298e9a304b471c3bd595b/diff: no such file or directory, extraDiskErr: <nil>
	Sep 14 22:32:53 addons-909789 kubelet[1360]: E0914 22:32:53.359309    1360 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/61d4cb88805e58029f0422bcac6626512006b0d05ead51f6dae81a1454744810/diff" to get inode usage: stat /var/lib/containers/storage/overlay/61d4cb88805e58029f0422bcac6626512006b0d05ead51f6dae81a1454744810/diff: no such file or directory, extraDiskErr: <nil>
	Sep 14 22:32:53 addons-909789 kubelet[1360]: E0914 22:32:53.359324    1360 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/beb2c6d70c661373251bd6a41bb56f5cab236b4da05b557d3e233cb18810a0fc/diff" to get inode usage: stat /var/lib/containers/storage/overlay/beb2c6d70c661373251bd6a41bb56f5cab236b4da05b557d3e233cb18810a0fc/diff: no such file or directory, extraDiskErr: <nil>
	Sep 14 22:32:53 addons-909789 kubelet[1360]: I0914 22:32:53.462399    1360 scope.go:117] "RemoveContainer" containerID="d14738137a88a8085d6eff0eb4ef81e88ded34b4f325411902d12496dd3a214e"
	Sep 14 22:32:53 addons-909789 kubelet[1360]: I0914 22:32:53.511447    1360 scope.go:117] "RemoveContainer" containerID="ffaa09d3e9c3b21e917689c9d38b451418c94f14d1579da12b7bd066f5a41053"
	
	* 
	* ==> storage-provisioner [e2426d162ab74891c9ffd6df29845208e83584ca80311c583791b7f24ac81e24] <==
	* I0914 22:28:42.702692       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 22:28:42.721203       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 22:28:42.722832       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 22:28:42.732287       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 22:28:42.736818       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-909789_76e8f05f-eae7-427f-9b62-05f9fe496450!
	I0914 22:28:42.733177       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c8cebc80-e96c-4022-8a18-5e0789c91bb3", APIVersion:"v1", ResourceVersion:"788", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-909789_76e8f05f-eae7-427f-9b62-05f9fe496450 became leader
	I0914 22:28:42.838081       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-909789_76e8f05f-eae7-427f-9b62-05f9fe496450!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-909789 -n addons-909789
helpers_test.go:261: (dbg) Run:  kubectl --context addons-909789 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (168.21s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (363.6s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-438037 addons enable ingress --alsologtostderr -v=5
E0914 22:39:49.832880 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
E0914 22:40:17.515493 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
E0914 22:41:42.927265 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
E0914 22:41:42.932598 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
E0914 22:41:42.942943 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
E0914 22:41:42.963387 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
E0914 22:41:43.004429 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
E0914 22:41:43.084700 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
E0914 22:41:43.245123 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
E0914 22:41:43.565753 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
E0914 22:41:44.206033 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
E0914 22:41:45.486489 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
E0914 22:41:48.047348 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
E0914 22:41:53.167796 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
E0914 22:42:03.408353 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
E0914 22:42:23.889271 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
E0914 22:43:04.850219 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
E0914 22:44:26.770793 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
E0914 22:44:49.832811 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-438037 addons enable ingress --alsologtostderr -v=5: exit status 10 (6m1.064522812s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 22:39:30.975096 2877155 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:39:30.978321 2877155 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:39:30.978335 2877155 out.go:309] Setting ErrFile to fd 2...
	I0914 22:39:30.978342 2877155 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:39:30.978731 2877155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-2840729/.minikube/bin
	I0914 22:39:30.979859 2877155 config.go:182] Loaded profile config "ingress-addon-legacy-438037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0914 22:39:30.979907 2877155 addons.go:594] checking whether the cluster is paused
	I0914 22:39:30.980417 2877155 config.go:182] Loaded profile config "ingress-addon-legacy-438037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0914 22:39:30.980462 2877155 host.go:66] Checking if "ingress-addon-legacy-438037" exists ...
	I0914 22:39:30.981038 2877155 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-438037 --format={{.State.Status}}
	I0914 22:39:31.002081 2877155 ssh_runner.go:195] Run: systemctl --version
	I0914 22:39:31.002142 2877155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-438037
	I0914 22:39:31.019630 2877155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36403 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/ingress-addon-legacy-438037/id_rsa Username:docker}
	I0914 22:39:31.118023 2877155 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:39:31.118104 2877155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:39:31.160764 2877155 cri.go:89] found id: "cafe9f18505ce7504a6f56982bfd9776971ef136689d6a2a7586815095c34739"
	I0914 22:39:31.160831 2877155 cri.go:89] found id: "b1e4183cba37c7a4a2dc1f88d09a2f9aa668e181cd6dae13939244675ea721ba"
	I0914 22:39:31.160850 2877155 cri.go:89] found id: "9f402e75947ee904968f7e9e180fab397a2506e694d6b9a57d9c7bf1a73c9b32"
	I0914 22:39:31.160868 2877155 cri.go:89] found id: "3f27f63906e23bbd4a0bfdbbb2f77e9e07b0a2d175cadc6f0676cdd788aa947d"
	I0914 22:39:31.160886 2877155 cri.go:89] found id: "780e22127b8db39f795a28700fe9c214d23132f05f2136225f3d2f7375563543"
	I0914 22:39:31.160922 2877155 cri.go:89] found id: "623b6b437d50508629b05820596abf28e9c10a1718b5b4657100c55687a897e3"
	I0914 22:39:31.160941 2877155 cri.go:89] found id: "81d8212acfd52e5c3e834537545ddd573c4cd0d0ae674e5fd6a6d2f318429c5f"
	I0914 22:39:31.160960 2877155 cri.go:89] found id: "83f98203d414d696e14b2711695f2c5a7d9d3c5076b22c1290bfe89285f9ead5"
	I0914 22:39:31.160979 2877155 cri.go:89] found id: "55334ffa86b91fe0538de4270106091fbede771928d115dc24738d4268024154"
	I0914 22:39:31.161021 2877155 cri.go:89] found id: ""
	I0914 22:39:31.161103 2877155 ssh_runner.go:195] Run: sudo runc list -f json
	I0914 22:39:31.194409 2877155 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"3f27f63906e23bbd4a0bfdbbb2f77e9e07b0a2d175cadc6f0676cdd788aa947d","pid":2139,"status":"running","bundle":"/run/containers/storage/overlay-containers/3f27f63906e23bbd4a0bfdbbb2f77e9e07b0a2d175cadc6f0676cdd788aa947d/userdata","rootfs":"/var/lib/containers/storage/overlay/f1c47adc2f5822c0f647e8985863c2c441f4efc7b9dc2f4d0937bd12f9f06cf0/merged","created":"2023-09-14T22:39:01.349058195Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"1cb1d658","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1cb1d658\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminatio
nMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3f27f63906e23bbd4a0bfdbbb2f77e9e07b0a2d175cadc6f0676cdd788aa947d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T22:39:01.299157661Z","io.kubernetes.cri-o.Image":"docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri-o.ImageRef":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-ft9s6\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d5386d34-1bfd-488c-a959-d4847ddb8a76\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-ft9s6_d5386d34-1bfd-488c-a959-d4847ddb8a76/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\
"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f1c47adc2f5822c0f647e8985863c2c441f4efc7b9dc2f4d0937bd12f9f06cf0/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-ft9s6_kube-system_d5386d34-1bfd-488c-a959-d4847ddb8a76_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/aaaa4ba223b42b43837885887987120a8f96325d1ad80689dde447efa178048b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"aaaa4ba223b42b43837885887987120a8f96325d1ad80689dde447efa178048b","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-ft9s6_kube-system_d5386d34-1bfd-488c-a959-d4847ddb8a76_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"r
eadonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d5386d34-1bfd-488c-a959-d4847ddb8a76/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d5386d34-1bfd-488c-a959-d4847ddb8a76/containers/kindnet-cni/b717260b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/d5386d34-1bfd-488c-a959-d4847ddb8a76/volumes/kubernetes.io~secret/kindnet-token-428cg\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-ft9s6","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d5386
d34-1bfd-488c-a959-d4847ddb8a76","kubernetes.io/config.seen":"2023-09-14T22:38:57.842355509Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"55334ffa86b91fe0538de4270106091fbede771928d115dc24738d4268024154","pid":1447,"status":"running","bundle":"/run/containers/storage/overlay-containers/55334ffa86b91fe0538de4270106091fbede771928d115dc24738d4268024154/userdata","rootfs":"/var/lib/containers/storage/overlay/84e9c11e1b9b5d721c38ea2088c7d7614967b87109301609311e07b755b2bf90/merged","created":"2023-09-14T22:38:32.916043232Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"f9992f48","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"f9992f48\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.termi
nationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"55334ffa86b91fe0538de4270106091fbede771928d115dc24738d4268024154","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T22:38:32.879464486Z","io.kubernetes.cri-o.Image":"ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.3-0","io.kubernetes.cri-o.ImageRef":"ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-ingress-addon-legacy-438037\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"eaab4fe9e21569f49e974834353f0f0c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-ingress-addon-legacy-438037_eaab4fe9e21569f49e974834353f0f0c/etcd/0.log","io.kubernetes.
cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/84e9c11e1b9b5d721c38ea2088c7d7614967b87109301609311e07b755b2bf90/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-ingress-addon-legacy-438037_kube-system_eaab4fe9e21569f49e974834353f0f0c_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/45478d10af744d4bb0072b2347c688f726994c5b4477d892f59f90551df547a4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"45478d10af744d4bb0072b2347c688f726994c5b4477d892f59f90551df547a4","io.kubernetes.cri-o.SandboxName":"k8s_etcd-ingress-addon-legacy-438037_kube-system_eaab4fe9e21569f49e974834353f0f0c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/eaab4fe9e21569f49e974834353f0f0c/etc-hosts\",\"readonly\":false,\"propagation\":0,\
"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/eaab4fe9e21569f49e974834353f0f0c/containers/etcd/c2958947\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-ingress-addon-legacy-438037","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"eaab4fe9e21569f49e974834353f0f0c","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"eaab4fe9e21569f49e974834353f0f0c","kubernetes.io/config.seen":"2023-09-14T22:38:29.751452160Z","kubernetes.io/config.source":"file"},"owner":"root"},{"oci
Version":"1.0.2-dev","id":"623b6b437d50508629b05820596abf28e9c10a1718b5b4657100c55687a897e3","pid":1544,"status":"running","bundle":"/run/containers/storage/overlay-containers/623b6b437d50508629b05820596abf28e9c10a1718b5b4657100c55687a897e3/userdata","rootfs":"/var/lib/containers/storage/overlay/02886acf38fdf0c3878daf70179eef981584528ae3b7a5cdf2c5391651fffefa/merged","created":"2023-09-14T22:38:33.232127094Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"fd1dd8ff","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"fd1dd8ff\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGrac
ePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"623b6b437d50508629b05820596abf28e9c10a1718b5b4657100c55687a897e3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T22:38:33.176670183Z","io.kubernetes.cri-o.Image":"2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.18.20","io.kubernetes.cri-o.ImageRef":"2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-ingress-addon-legacy-438037\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"78b40af95c64e5112ac985f00b18628c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-ingress-addon-legacy-438037_78b40af95c64e5112ac985f00b18628c/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/
containers/storage/overlay/02886acf38fdf0c3878daf70179eef981584528ae3b7a5cdf2c5391651fffefa/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-ingress-addon-legacy-438037_kube-system_78b40af95c64e5112ac985f00b18628c_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/197b4ed6dc804053131bfe5c950c31e22a0113f6f851fa89c02e768c0a24037d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"197b4ed6dc804053131bfe5c950c31e22a0113f6f851fa89c02e768c0a24037d","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-ingress-addon-legacy-438037_kube-system_78b40af95c64e5112ac985f00b18628c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/78b40af95c64e5112ac985f00b18628c/containers/kube-apiserver/c4e79125\",\"readonly\":false,\"propagation\":0,\"selinux_relabel
\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/78b40af95c64e5112ac985f00b18628c/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-ingress-addon-legacy-438
037","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"78b40af95c64e5112ac985f00b18628c","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"78b40af95c64e5112ac985f00b18628c","kubernetes.io/config.seen":"2023-09-14T22:38:29.736914686Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"780e22127b8db39f795a28700fe9c214d23132f05f2136225f3d2f7375563543","pid":2002,"status":"running","bundle":"/run/containers/storage/overlay-containers/780e22127b8db39f795a28700fe9c214d23132f05f2136225f3d2f7375563543/userdata","rootfs":"/var/lib/containers/storage/overlay/88400173c8eccc56e03d2728a0cc0e74942306184ded5b086c7b505d0b8750e6/merged","created":"2023-09-14T22:38:58.583817493Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"2ce03b8d","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kuberne
tes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"2ce03b8d\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"780e22127b8db39f795a28700fe9c214d23132f05f2136225f3d2f7375563543","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T22:38:58.5112605Z","io.kubernetes.cri-o.Image":"565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.18.20","io.kubernetes.cri-o.ImageRef":"565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-pro
xy-79mhd\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a9cc9c4a-d968-4403-a34b-9ea2c671326f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-79mhd_a9cc9c4a-d968-4403-a34b-9ea2c671326f/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/88400173c8eccc56e03d2728a0cc0e74942306184ded5b086c7b505d0b8750e6/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-79mhd_kube-system_a9cc9c4a-d968-4403-a34b-9ea2c671326f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/5906684b7acd1c5ec82387f5db7b66075189c0eb16316dabf8bc0fd025dce653/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5906684b7acd1c5ec82387f5db7b66075189c0eb16316dabf8bc0fd025dce653","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-79mhd_kube-system_a9cc9c4a-d968-4403-a34b-9ea2c671326f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","i
o.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a9cc9c4a-d968-4403-a34b-9ea2c671326f/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a9cc9c4a-d968-4403-a34b-9ea2c671326f/containers/kube-proxy/29c8faca\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/a9cc9c4a-d968-4403-a34b-9ea2c671326f/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path
\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/a9cc9c4a-d968-4403-a34b-9ea2c671326f/volumes/kubernetes.io~secret/kube-proxy-token-c9xcg\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-79mhd","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a9cc9c4a-d968-4403-a34b-9ea2c671326f","kubernetes.io/config.seen":"2023-09-14T22:38:57.851216811Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"81d8212acfd52e5c3e834537545ddd573c4cd0d0ae674e5fd6a6d2f318429c5f","pid":1546,"status":"running","bundle":"/run/containers/storage/overlay-containers/81d8212acfd52e5c3e834537545ddd573c4cd0d0ae674e5fd6a6d2f318429c5f/userdata","rootfs":"/var/lib/containers/storage/overlay/9a5a3997ea66c1a66a1f7998b8cfd09d4adea2626704e16a076c0c7acae797dc/merged","created":"2023-09-14T22:38:33.221995745Z","annotations":{"io.container.manager":"cri-o",
"io.kubernetes.container.hash":"ef5ef709","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ef5ef709\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"81d8212acfd52e5c3e834537545ddd573c4cd0d0ae674e5fd6a6d2f318429c5f","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T22:38:33.165652271Z","io.kubernetes.cri-o.Image":"095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.18.20","io.kubernetes.cri-o.ImageRef":"095f37015706de6eedb4f57eb2f9a25a
1e3bf4bec63d50ba73f8968ef4094fd1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-ingress-addon-legacy-438037\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d12e497b0008e22acbcd5a9cf2dd48ac\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-ingress-addon-legacy-438037_d12e497b0008e22acbcd5a9cf2dd48ac/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9a5a3997ea66c1a66a1f7998b8cfd09d4adea2626704e16a076c0c7acae797dc/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-ingress-addon-legacy-438037_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/211a90213946c3917980c7c422ff5f639fc73625f60d321417cec5d9811b2ddf/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"211a90213946c3917980c7c4
22ff5f639fc73625f60d321417cec5d9811b2ddf","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-ingress-addon-legacy-438037_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d12e497b0008e22acbcd5a9cf2dd48ac/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d12e497b0008e22acbcd5a9cf2dd48ac/containers/kube-scheduler/7167f557\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-ingress-addon-legacy-438037","io.kubernetes.pod.namespace
":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d12e497b0008e22acbcd5a9cf2dd48ac","kubernetes.io/config.hash":"d12e497b0008e22acbcd5a9cf2dd48ac","kubernetes.io/config.seen":"2023-09-14T22:38:29.747427589Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"83f98203d414d696e14b2711695f2c5a7d9d3c5076b22c1290bfe89285f9ead5","pid":1491,"status":"running","bundle":"/run/containers/storage/overlay-containers/83f98203d414d696e14b2711695f2c5a7d9d3c5076b22c1290bfe89285f9ead5/userdata","rootfs":"/var/lib/containers/storage/overlay/8b030e91f63de260f74ca5b61b65dcf49156b8deabe62309695a447252b8fbf9/merged","created":"2023-09-14T22:38:33.069241995Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ce880c0b","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePo
licy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ce880c0b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"83f98203d414d696e14b2711695f2c5a7d9d3c5076b22c1290bfe89285f9ead5","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T22:38:32.999925181Z","io.kubernetes.cri-o.Image":"68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.18.20","io.kubernetes.cri-o.ImageRef":"68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-ingress-addon-legacy-438037\",\"io.kubernetes.pod.namespace\":
\"kube-system\",\"io.kubernetes.pod.uid\":\"49b043cd68fd30a453bdf128db5271f3\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-ingress-addon-legacy-438037_49b043cd68fd30a453bdf128db5271f3/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8b030e91f63de260f74ca5b61b65dcf49156b8deabe62309695a447252b8fbf9/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-ingress-addon-legacy-438037_kube-system_49b043cd68fd30a453bdf128db5271f3_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/b280add1026b65ba1a44a3b8c9202dd0659c05d850221c9c941d7047764be332/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b280add1026b65ba1a44a3b8c9202dd0659c05d850221c9c941d7047764be332","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-ingress-addon-legacy-438037_kube-system_49b043cd68fd30a453bdf128db5271f3_
0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/49b043cd68fd30a453bdf128db5271f3/containers/kube-controller-manager/b017a873\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/49b043cd68fd30a453bdf128db5271f3/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"pro
pagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-ingress-addon-legacy-438037","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"49b043cd68fd30a453bdf128db5271f3","kubernetes.io/config.hash":"49b043cd68fd30a453bdf128db5271
f3","kubernetes.io/config.seen":"2023-09-14T22:38:29.743259961Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9f402e75947ee904968f7e9e180fab397a2506e694d6b9a57d9c7bf1a73c9b32","pid":2250,"status":"running","bundle":"/run/containers/storage/overlay-containers/9f402e75947ee904968f7e9e180fab397a2506e694d6b9a57d9c7bf1a73c9b32/userdata","rootfs":"/var/lib/containers/storage/overlay/94e798d70a1cc76abf940e2a081c25a821409eb0166384fdba8af2f06dfb4339/merged","created":"2023-09-14T22:39:20.28187211Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c790637a","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.
container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c790637a\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"9f402e75947ee904968f7e9e180fab397a2506e694d6b9a57d9c7bf1a73c9b32","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T22:39:20.247460721Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184","io.kubernetes.cri-o
.ImageName":"k8s.gcr.io/coredns:1.6.7","io.kubernetes.cri-o.ImageRef":"6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-66bff467f8-5vlzt\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"6e80d32c-0f03-48b3-a30a-21f772c3a5c1\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-66bff467f8-5vlzt_6e80d32c-0f03-48b3-a30a-21f772c3a5c1/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/94e798d70a1cc76abf940e2a081c25a821409eb0166384fdba8af2f06dfb4339/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-66bff467f8-5vlzt_kube-system_6e80d32c-0f03-48b3-a30a-21f772c3a5c1_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c04499f3b2a792f23489d75ca52f59bdec161caee0616e80408dc93035a0ebba/userdata/resolv.conf","io.kubernetes.cri-o.San
dboxID":"c04499f3b2a792f23489d75ca52f59bdec161caee0616e80408dc93035a0ebba","io.kubernetes.cri-o.SandboxName":"k8s_coredns-66bff467f8-5vlzt_kube-system_6e80d32c-0f03-48b3-a30a-21f772c3a5c1_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/6e80d32c-0f03-48b3-a30a-21f772c3a5c1/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/6e80d32c-0f03-48b3-a30a-21f772c3a5c1/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/6e80d32c-0f03-48b3-a30a-21f772c3a5c1/containers/coredns/7daf38b9\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/v
ar/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/6e80d32c-0f03-48b3-a30a-21f772c3a5c1/volumes/kubernetes.io~secret/coredns-token-z7x5l\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-66bff467f8-5vlzt","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"6e80d32c-0f03-48b3-a30a-21f772c3a5c1","kubernetes.io/config.seen":"2023-09-14T22:39:18.969207888Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b1e4183cba37c7a4a2dc1f88d09a2f9aa668e181cd6dae13939244675ea721ba","pid":2315,"status":"running","bundle":"/run/containers/storage/overlay-containers/b1e4183cba37c7a4a2dc1f88d09a2f9aa668e181cd6dae13939244675ea721ba/userdata","rootfs":"/var/lib/containers/storage/overlay/aa3339ce08ef301cf24940e49c4669f846b8e044c75fa485aa3eb1add184c3b7/merged","created":"2023-09-14T22:39:21.374330131Z","annotations":{"io.container.manager":"cri-o","i
o.kubernetes.container.hash":"c790637a","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c790637a\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessag
ePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b1e4183cba37c7a4a2dc1f88d09a2f9aa668e181cd6dae13939244675ea721ba","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T22:39:21.340098894Z","io.kubernetes.cri-o.IP.0":"10.244.0.3","io.kubernetes.cri-o.Image":"6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns:1.6.7","io.kubernetes.cri-o.ImageRef":"6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-66bff467f8-hzd5r\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"6df64232-0e4b-4f95-863f-8195e0b19ed6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-66bff467f8-hzd5r_6df64232-0e4b-4f95-863f-8195e0b19ed6/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kube
rnetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/aa3339ce08ef301cf24940e49c4669f846b8e044c75fa485aa3eb1add184c3b7/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-66bff467f8-hzd5r_kube-system_6df64232-0e4b-4f95-863f-8195e0b19ed6_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ccb2db598c7237510ea74aae5dab70d02056f49ab6d9bfb26bb56e02f9331a6e/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ccb2db598c7237510ea74aae5dab70d02056f49ab6d9bfb26bb56e02f9331a6e","io.kubernetes.cri-o.SandboxName":"k8s_coredns-66bff467f8-hzd5r_kube-system_6df64232-0e4b-4f95-863f-8195e0b19ed6_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/6df64232-0e4b-4f95-863f-8195e0b19ed6/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_r
elabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/6df64232-0e4b-4f95-863f-8195e0b19ed6/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/6df64232-0e4b-4f95-863f-8195e0b19ed6/containers/coredns/4a0bef25\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/6df64232-0e4b-4f95-863f-8195e0b19ed6/volumes/kubernetes.io~secret/coredns-token-z7x5l\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-66bff467f8-hzd5r","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"6df64232-0e4b-4f95-863f-8195e0b19ed6","kubernetes.io/config.seen":"2023-09-14T22:39:20.970133909Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev",
"id":"cafe9f18505ce7504a6f56982bfd9776971ef136689d6a2a7586815095c34739","pid":2365,"status":"running","bundle":"/run/containers/storage/overlay-containers/cafe9f18505ce7504a6f56982bfd9776971ef136689d6a2a7586815095c34739/userdata","rootfs":"/var/lib/containers/storage/overlay/1f32cb99cf0f9189cf2606c922f6b7e2024e60c1178d591dc2bd8caf31bf8a14/merged","created":"2023-09-14T22:39:23.59454156Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"a259f2c7","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"a259f2c7\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}
","io.kubernetes.cri-o.ContainerID":"cafe9f18505ce7504a6f56982bfd9776971ef136689d6a2a7586815095c34739","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T22:39:23.538054134Z","io.kubernetes.cri-o.Image":"gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"0a1d1b79-2747-4d8d-8b93-c687e75482f0\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_0a1d1b79-2747-4d8d-8b93-c687e75482f0/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPo
int":"/var/lib/containers/storage/overlay/1f32cb99cf0f9189cf2606c922f6b7e2024e60c1178d591dc2bd8caf31bf8a14/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_0a1d1b79-2747-4d8d-8b93-c687e75482f0_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/0f1b0c92980861aa2e7fc4860cb1c0c6f93a4e44008812fb2055dd6d6b2ca13f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"0f1b0c92980861aa2e7fc4860cb1c0c6f93a4e44008812fb2055dd6d6b2ca13f","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_0a1d1b79-2747-4d8d-8b93-c687e75482f0_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/0a1d1b79-2747-4d8d-8b93-c687e75482f0
/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/0a1d1b79-2747-4d8d-8b93-c687e75482f0/containers/storage-provisioner/f5d38f9c\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/0a1d1b79-2747-4d8d-8b93-c687e75482f0/volumes/kubernetes.io~secret/storage-provisioner-token-tlj7f\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"0a1d1b79-2747-4d8d-8b93-c687e75482f0","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"st
orage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2023-09-14T22:39:20.973909758Z","kubernetes.io/config.source":"api"},"owner":"root"}]
	I0914 22:39:31.195107 2877155 cri.go:126] list returned 9 containers
	I0914 22:39:31.195123 2877155 cri.go:129] container: {ID:3f27f63906e23bbd4a0bfdbbb2f77e9e07b0a2d175cadc6f0676cdd788aa947d Status:running}
	I0914 22:39:31.195141 2877155 cri.go:135] skipping {3f27f63906e23bbd4a0bfdbbb2f77e9e07b0a2d175cadc6f0676cdd788aa947d running}: state = "running", want "paused"
	I0914 22:39:31.195156 2877155 cri.go:129] container: {ID:55334ffa86b91fe0538de4270106091fbede771928d115dc24738d4268024154 Status:running}
	I0914 22:39:31.195163 2877155 cri.go:135] skipping {55334ffa86b91fe0538de4270106091fbede771928d115dc24738d4268024154 running}: state = "running", want "paused"
	I0914 22:39:31.195170 2877155 cri.go:129] container: {ID:623b6b437d50508629b05820596abf28e9c10a1718b5b4657100c55687a897e3 Status:running}
	I0914 22:39:31.195176 2877155 cri.go:135] skipping {623b6b437d50508629b05820596abf28e9c10a1718b5b4657100c55687a897e3 running}: state = "running", want "paused"
	I0914 22:39:31.195183 2877155 cri.go:129] container: {ID:780e22127b8db39f795a28700fe9c214d23132f05f2136225f3d2f7375563543 Status:running}
	I0914 22:39:31.195192 2877155 cri.go:135] skipping {780e22127b8db39f795a28700fe9c214d23132f05f2136225f3d2f7375563543 running}: state = "running", want "paused"
	I0914 22:39:31.195198 2877155 cri.go:129] container: {ID:81d8212acfd52e5c3e834537545ddd573c4cd0d0ae674e5fd6a6d2f318429c5f Status:running}
	I0914 22:39:31.195205 2877155 cri.go:135] skipping {81d8212acfd52e5c3e834537545ddd573c4cd0d0ae674e5fd6a6d2f318429c5f running}: state = "running", want "paused"
	I0914 22:39:31.195213 2877155 cri.go:129] container: {ID:83f98203d414d696e14b2711695f2c5a7d9d3c5076b22c1290bfe89285f9ead5 Status:running}
	I0914 22:39:31.195220 2877155 cri.go:135] skipping {83f98203d414d696e14b2711695f2c5a7d9d3c5076b22c1290bfe89285f9ead5 running}: state = "running", want "paused"
	I0914 22:39:31.195228 2877155 cri.go:129] container: {ID:9f402e75947ee904968f7e9e180fab397a2506e694d6b9a57d9c7bf1a73c9b32 Status:running}
	I0914 22:39:31.195237 2877155 cri.go:135] skipping {9f402e75947ee904968f7e9e180fab397a2506e694d6b9a57d9c7bf1a73c9b32 running}: state = "running", want "paused"
	I0914 22:39:31.195245 2877155 cri.go:129] container: {ID:b1e4183cba37c7a4a2dc1f88d09a2f9aa668e181cd6dae13939244675ea721ba Status:running}
	I0914 22:39:31.195253 2877155 cri.go:135] skipping {b1e4183cba37c7a4a2dc1f88d09a2f9aa668e181cd6dae13939244675ea721ba running}: state = "running", want "paused"
	I0914 22:39:31.195259 2877155 cri.go:129] container: {ID:cafe9f18505ce7504a6f56982bfd9776971ef136689d6a2a7586815095c34739 Status:running}
	I0914 22:39:31.195268 2877155 cri.go:135] skipping {cafe9f18505ce7504a6f56982bfd9776971ef136689d6a2a7586815095c34739 running}: state = "running", want "paused"
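	The cri.go lines above show minikube enumerating the nine CRI-O containers and skipping each one because its state is "running" while the caller is looking for "paused" containers. A minimal sketch of that kind of state filter is below; the type and function names are hypothetical and for illustration only, not minikube's actual code.

	    package main

	    import "fmt"

	    // container is a minimal stand-in for an entry returned by the CRI runtime list
	    // (hypothetical type, for illustration only).
	    type container struct {
	        id     string
	        status string
	    }

	    // filterByState keeps only containers whose status matches want, mirroring the
	    // `skipping {... running}: state = "running", want "paused"` lines in the log above.
	    func filterByState(all []container, want string) []container {
	        var kept []container
	        for _, c := range all {
	            if c.status != want {
	                continue // state does not match; skip, as in the log
	            }
	            kept = append(kept, c)
	        }
	        return kept
	    }

	    func main() {
	        all := []container{
	            {id: "3f27f63906e2", status: "running"},
	            {id: "55334ffa86b9", status: "running"},
	        }
	        // Prints an empty slice: every container is running, none is paused,
	        // which is why the log reports "list returned 9 containers" but keeps none.
	        fmt.Println(filterByState(all, "paused"))
	    }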
	I0914 22:39:31.198341 2877155 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0914 22:39:31.200850 2877155 config.go:182] Loaded profile config "ingress-addon-legacy-438037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0914 22:39:31.200869 2877155 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-438037"
	I0914 22:39:31.200878 2877155 addons.go:231] Setting addon ingress=true in "ingress-addon-legacy-438037"
	I0914 22:39:31.200932 2877155 host.go:66] Checking if "ingress-addon-legacy-438037" exists ...
	I0914 22:39:31.201372 2877155 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-438037 --format={{.State.Status}}
	I0914 22:39:31.220790 2877155 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0914 22:39:31.223045 2877155 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0914 22:39:31.225342 2877155 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0914 22:39:31.227567 2877155 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 22:39:31.227588 2877155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0914 22:39:31.227741 2877155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-438037
	I0914 22:39:31.245048 2877155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36403 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/ingress-addon-legacy-438037/id_rsa Username:docker}
	I0914 22:39:31.363586 2877155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 22:39:31.951642 2877155 addons.go:467] Verifying addon ingress=true in "ingress-addon-legacy-438037"
	I0914 22:39:31.954202 2877155 out.go:177] * Verifying ingress addon...
	I0914 22:39:31.957445 2877155 kapi.go:59] client config for ingress-addon-legacy-438037: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.key", CAFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[
]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 22:39:31.958279 2877155 cert_rotation.go:137] Starting client certificate rotation controller
	I0914 22:39:31.958846 2877155 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0914 22:39:31.978565 2877155 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0914 22:39:31.978636 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:31.982464 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:32.486864 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:32.987371 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:33.486785 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:33.987827 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:34.487255 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:34.986631 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:35.487123 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:35.986990 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:36.487173 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:36.986329 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:37.486587 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:37.986917 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:38.487424 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:38.986302 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:39.486557 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:39.986766 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:40.486923 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:40.986596 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:41.487099 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:41.986424 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:42.486759 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:42.987015 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:43.486312 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:43.986731 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:44.486979 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:44.987835 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:45.486542 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:45.986713 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:46.487197 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:46.986437 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:47.486872 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:47.987227 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:48.486389 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:48.986454 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:49.486326 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:49.986379 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:50.486258 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:50.986784 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:51.487014 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:51.986255 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:52.486505 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:52.986679 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:53.486780 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:53.986955 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:54.487348 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:54.986765 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:55.487400 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:55.986957 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:56.487095 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:56.986162 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:57.486413 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:57.986253 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:58.486389 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:58.986291 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:59.487468 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:39:59.986858 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:00.486986 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:00.986678 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:01.486979 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:01.986731 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:02.487202 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:02.986413 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:03.486322 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:03.986467 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:04.486808 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:04.987213 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:05.486371 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:05.986945 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:06.487020 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:06.986169 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:07.486280 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:07.986528 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:08.486910 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:08.987170 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:09.486613 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:09.987048 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:10.486829 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:10.986737 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:11.486856 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:11.987008 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:12.486194 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:12.986365 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:13.486345 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:13.986619 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:14.486801 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:14.990563 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:15.486890 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:15.987167 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:16.486539 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:16.986768 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:17.487195 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:17.986221 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:18.486715 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:18.987043 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:19.486373 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:19.986180 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:20.486276 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:20.986932 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:21.487098 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:21.986572 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:22.487036 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:22.986679 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:23.486964 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:23.986135 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:24.486239 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:24.986306 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:25.486340 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:25.986726 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:26.486958 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:26.986180 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:27.486254 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:27.986384 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:28.486164 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:28.986361 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:29.486344 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:29.986411 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:30.486297 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:30.987172 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:31.486503 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:31.988820 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:32.487005 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:32.986197 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:33.486367 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:33.986203 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:34.486511 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:34.986894 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:35.487294 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:35.989672 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:36.487100 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:36.986321 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:37.486511 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:37.986558 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:38.486560 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:38.986882 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:39.487295 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:39.986197 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:40.486398 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:40.987246 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:41.486528 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:41.986908 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:42.486186 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:42.986337 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:43.487226 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:43.986460 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:44.486927 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:44.987328 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:45.486377 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:45.986838 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:46.487324 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:46.986434 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:47.486996 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:47.986186 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:48.486386 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:48.986231 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:49.486382 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:49.986222 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:50.486377 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:50.986954 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:51.486935 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:51.987027 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:52.487295 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:52.986408 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:53.486925 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:53.986331 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:54.486275 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:54.986434 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:55.486397 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:55.986940 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:56.487231 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:56.986623 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:57.487971 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:57.987120 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:58.486389 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:58.986124 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:59.486379 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:40:59.986476 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:00.490039 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:00.986715 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:01.486801 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:01.987080 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:02.486405 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:02.986632 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:03.486421 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:03.986799 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:04.487050 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:04.986393 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:05.486642 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:05.989550 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:06.486826 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:06.987104 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:07.487096 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:07.986428 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:08.486857 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:08.987200 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:09.486468 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:09.987232 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:10.486231 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:10.986853 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:11.487385 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:11.988471 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:12.486990 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:12.986326 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:13.486248 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:13.986462 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:14.486901 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:14.987156 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:15.486431 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:15.986816 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:16.487077 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:16.987197 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:17.486431 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:17.986619 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:18.486945 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:18.986893 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:19.487162 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:19.986268 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:20.486802 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:20.986618 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:21.487058 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:21.986716 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:22.487018 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:22.986636 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:23.487150 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:23.986603 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:24.488218 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:24.986512 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:25.486754 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:25.987755 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:26.486429 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:26.986677 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:27.487091 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:27.986938 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:28.487251 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:28.986587 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:29.487379 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:29.986332 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:30.486658 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:30.987006 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:31.486307 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:31.986762 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:32.487185 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:32.986313 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:33.486762 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:33.986902 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:34.487435 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:34.986531 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:35.486689 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:35.987221 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:36.486454 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:36.986380 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:37.486610 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:37.986729 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:38.487131 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:38.986407 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:39.486337 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:39.988108 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:40.486538 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:40.986950 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:41.487345 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:41.987078 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:42.486663 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:42.987153 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:43.486410 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:43.986598 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:44.487075 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:44.987454 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:45.486361 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:45.986989 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:46.486526 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:46.986334 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:47.486772 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:47.987010 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:48.487447 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:48.986898 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:49.486428 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:49.986757 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:50.487074 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:50.986922 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:51.487345 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:51.986708 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:52.487141 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:52.986380 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:53.486366 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:53.986551 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:54.487311 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:54.986617 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:55.487066 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:55.987165 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:56.490532 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:56.986233 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:57.486887 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:57.987061 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:58.486230 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:58.986398 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:59.486475 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:41:59.986560 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:00.486936 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:00.986462 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:01.486674 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:01.987076 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:02.486397 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:02.986751 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:03.487100 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:03.986259 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:04.486695 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:04.986778 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:05.487042 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:05.986499 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:06.486397 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:06.986436 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:07.486869 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:07.987284 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:08.486220 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:08.986263 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:09.486261 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:09.986399 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:10.486296 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:10.986876 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:11.487273 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:11.986756 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:12.487354 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:12.986110 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:13.486227 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:13.991086 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:14.486402 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:14.986744 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:15.486729 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:15.986935 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:16.487561 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:16.986523 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:17.487125 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:17.986569 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:18.486990 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:18.986181 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:19.486396 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:19.986310 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:20.486302 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:20.986761 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:21.487373 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:21.986926 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:22.487542 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:22.986904 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:23.487627 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:23.986891 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:24.487201 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:24.986323 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:25.486775 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:25.987128 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:26.486693 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:26.986748 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:27.487089 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:27.990774 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:28.486283 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:28.986278 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:29.486633 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:29.987114 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:30.486352 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:30.986672 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:31.487137 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:31.986611 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:32.486941 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:32.987271 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:33.486607 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:33.986932 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:34.487395 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:34.986714 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:35.487209 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:35.986556 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:36.489311 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:36.986442 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:37.486267 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:37.986551 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:38.487190 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:38.986413 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:39.486661 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:39.986810 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:40.487109 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:40.986613 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:41.486884 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:41.987499 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:42.487094 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:42.986682 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:43.487125 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:43.986475 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:44.486798 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:44.987116 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:45.486221 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:45.986536 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:46.486779 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:46.987020 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:47.486500 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:47.986291 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:48.486494 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:48.986666 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:49.486932 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:49.988759 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:50.487178 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:50.986688 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:51.486942 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:51.986294 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:52.486723 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:52.986791 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:53.486879 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:53.987065 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:54.486564 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:54.986849 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:55.487099 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:55.986452 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:56.486648 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:56.986752 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:57.486982 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:57.987134 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:58.486286 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:58.986445 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:59.486346 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:42:59.986473 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:00.486195 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:00.986754 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:01.486948 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:01.986350 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:02.489899 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:02.987371 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:03.486483 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:03.986259 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:04.486599 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:04.986875 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:05.487302 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:05.986605 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:06.487068 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:06.986465 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:07.486557 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:07.986877 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:08.487291 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:08.986544 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:09.486823 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:09.987204 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:10.486292 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:10.986805 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:11.487074 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:11.986604 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:12.486832 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:12.987017 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:13.486234 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:13.986253 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:14.486400 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:14.986539 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:15.486933 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:15.987347 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:16.486789 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:16.986971 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:17.487271 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:17.986442 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:18.486727 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:18.987372 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:19.486362 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:19.986655 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:20.487046 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:20.986429 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:21.486552 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:21.986831 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:22.488199 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:22.986496 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:23.486607 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:23.986834 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:24.487132 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:24.986633 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:25.486517 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:25.986854 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:26.487211 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:26.986372 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:27.486608 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:27.986921 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:28.487100 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:28.986219 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:29.486361 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:29.987906 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:30.486917 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:30.987042 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:31.486214 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:31.986897 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:32.487529 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:32.986895 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:33.487307 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:33.986347 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:34.486240 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:34.986421 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:35.486418 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:35.986682 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:36.487254 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:36.986578 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:37.486324 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:37.986834 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:38.486971 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:38.987103 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:39.486269 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:39.986526 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:40.486302 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:40.990697 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:41.486936 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:41.987056 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:42.486389 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:42.986497 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:43.486493 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:43.986362 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:44.486515 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:44.986770 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:45.487211 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:45.986946 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:46.487204 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:46.986200 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:47.486950 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:47.986121 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:48.487030 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:48.986202 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:49.487199 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:49.986199 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:50.486587 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:50.986959 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:51.486293 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:51.986780 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:52.486710 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:52.987152 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:53.486399 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:53.986266 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:54.486291 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:54.986789 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:55.487357 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:55.986728 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:56.488291 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:56.986427 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:57.487047 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:57.986218 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:58.489830 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:58.987523 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:59.486724 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:43:59.987011 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:00.486395 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:00.986930 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:01.487376 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:01.987048 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:02.486362 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:02.986286 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:03.486327 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:03.986199 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:04.486205 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:04.986257 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:05.486517 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:05.986954 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:06.486587 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:06.986911 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:07.487220 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:07.986274 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:08.486408 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:08.986732 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:09.486899 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:09.987246 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:10.487141 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:10.986863 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:11.487131 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:11.986529 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:12.487053 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:12.987082 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:13.486474 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:13.986334 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:14.486651 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:14.986941 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:15.487740 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:15.987128 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:16.486371 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:16.986756 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:17.487141 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:17.986381 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:18.486660 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:18.987259 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:19.486442 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:19.986664 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:20.487373 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:20.986806 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:21.487025 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:21.987162 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:22.486331 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:22.986211 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:23.486085 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:23.986260 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:24.486371 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:24.986417 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:25.486322 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:25.986931 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:26.487248 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:26.986515 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:27.487212 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:27.986351 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:28.486466 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:28.986213 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:29.486453 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:29.987085 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:30.486407 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:30.986800 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:31.488164 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:31.987042 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:32.486453 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:32.986880 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:33.487316 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:33.986324 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:34.486667 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:34.986885 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:35.487140 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:35.986421 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:36.486527 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:36.987146 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:37.486358 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:37.986772 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:38.487024 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:38.986266 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:39.486429 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:39.986814 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:40.487521 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:40.987050 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:41.486257 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:41.986769 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:42.487288 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:42.986640 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:43.486953 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:43.988190 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:44.486493 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:44.986975 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:45.487313 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:45.986640 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:46.486844 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:46.987153 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:47.486441 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:47.986429 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:48.486572 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:48.986875 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:49.487146 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:49.986500 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:50.486520 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:50.986864 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:51.487103 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:51.986855 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:52.487187 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:52.986345 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:53.486472 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:53.986338 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:54.486740 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:54.987592 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:55.486887 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:55.986934 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:56.486717 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:56.987164 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:57.486321 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:57.986852 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:58.487242 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:58.986211 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:59.486382 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:44:59.986207 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:00.486301 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:00.986845 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:01.487203 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:01.986791 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:02.490168 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:02.986937 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:03.487651 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:03.987063 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:04.486228 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:04.986268 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:05.486482 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:05.986910 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:06.487253 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:06.986431 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:07.486466 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:07.986302 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:08.486800 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:08.986988 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:09.487181 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:09.986293 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:10.486585 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:10.987202 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:11.486669 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:11.986986 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:12.486166 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:12.986244 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:13.486164 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:13.986237 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:14.486265 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:14.986408 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:15.486558 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:15.986927 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:16.487083 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:16.986178 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:17.486220 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:17.989332 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:18.487192 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:18.986520 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:19.486664 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:19.986974 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:20.487357 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:20.986722 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:21.488153 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:21.986510 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:22.487167 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:22.986263 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:23.486541 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:23.986854 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:24.487179 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:24.986262 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:25.486506 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:25.987096 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:26.486162 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:26.986321 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:27.486827 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:27.986969 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:28.487542 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:28.987139 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:29.486296 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:29.986435 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:30.486633 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:30.986197 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:31.486195 2877155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 22:45:31.958933 2877155 kapi.go:107] duration metric: took 6m0.000062918s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0914 22:45:31.961427 2877155 out.go:177] 
	W0914 22:45:31.963752 2877155 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	W0914 22:45:31.963773 2877155 out.go:239] * 
	W0914 22:45:31.974249 2877155 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 22:45:31.976182 2877155 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
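
The repeated kapi.go:96 entries above are a 500ms label-selector poll that hit its 6m0s context deadline before any app.kubernetes.io/name=ingress-nginx pod reached Running, which is what surfaces as the MK_ADDON_ENABLE failure and exit status 10. A minimal client-go sketch of that kind of wait loop is shown below; it is illustrative only (the kubeconfig path and the all-namespaces listing are assumptions), not minikube's actual kapi.go code.

	// podwait.go - illustrative sketch of a label-selector wait loop, not minikube's kapi.go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// kubeconfigPath is a placeholder; point it at the profile's kubeconfig.
		kubeconfigPath := "/home/jenkins/.kube/config"
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Same shape as the log above: poll every 500ms, give up after 6 minutes.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		selector := "app.kubernetes.io/name=ingress-nginx"

		for {
			pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						fmt.Println("ingress-nginx pod running:", p.Name)
						return
					}
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
			}
			select {
			case <-ctx.Done():
				// This is the branch the test hit: "context deadline exceeded".
				fmt.Println("waiting for", selector, "pods:", ctx.Err())
				return
			case <-time.After(500 * time.Millisecond):
			}
		}
	}
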
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-438037
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-438037:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b0c5401d2b6589a2101ad70a82b98c8003160ae7878e4de50a6024b59febda5d",
	        "Created": "2023-09-14T22:38:08.28874024Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2874463,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-14T22:38:08.610860935Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:dc3fcbe613a9f8e1e2fcaa6abcc8f1cc38d54475810991578dbd56e1d327de1f",
	        "ResolvConfPath": "/var/lib/docker/containers/b0c5401d2b6589a2101ad70a82b98c8003160ae7878e4de50a6024b59febda5d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b0c5401d2b6589a2101ad70a82b98c8003160ae7878e4de50a6024b59febda5d/hostname",
	        "HostsPath": "/var/lib/docker/containers/b0c5401d2b6589a2101ad70a82b98c8003160ae7878e4de50a6024b59febda5d/hosts",
	        "LogPath": "/var/lib/docker/containers/b0c5401d2b6589a2101ad70a82b98c8003160ae7878e4de50a6024b59febda5d/b0c5401d2b6589a2101ad70a82b98c8003160ae7878e4de50a6024b59febda5d-json.log",
	        "Name": "/ingress-addon-legacy-438037",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-438037:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-438037",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a07883f0a34ec5c58b12d2c5d26526d6aeb416f022975246bd3bc2271fbb7c34-init/diff:/var/lib/docker/overlay2/01d6f4b44b4d3652921d9dfec86a5600f173a3b2af60ce73c84e7669723804ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a07883f0a34ec5c58b12d2c5d26526d6aeb416f022975246bd3bc2271fbb7c34/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a07883f0a34ec5c58b12d2c5d26526d6aeb416f022975246bd3bc2271fbb7c34/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a07883f0a34ec5c58b12d2c5d26526d6aeb416f022975246bd3bc2271fbb7c34/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-438037",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-438037/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-438037",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-438037",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-438037",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2298efcc0440e913519c26208ad72b9dea63d7fbedbb50dd7305dbc6af0cd169",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36403"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36402"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36399"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36401"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36400"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2298efcc0440",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-438037": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b0c5401d2b65",
	                        "ingress-addon-legacy-438037"
	                    ],
	                    "NetworkID": "6548aec4158f04656459ab4dc15211040015d4503b13087d384dd575ba38a18f",
	                    "EndpointID": "09b828b7258720d1768de6e899c7d16999a34b454aae7427adc2246f35138f3f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
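
When only a few of the fields above matter for triage (container state, the published SSH port, the node IP), they can be pulled straight from docker inspect with a Go template instead of reading the full JSON; minikube itself uses the same index-based port template later in these logs. The snippet below is a small sketch using os/exec, assuming the profile name shown above; it is illustrative only and not part of helpers_test.go.

	// inspectfields.go - illustrative sketch; pulls a few fields from `docker inspect`.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		name := "ingress-addon-legacy-438037"
		// Same fields the post-mortem cares about: state, host port for 22/tcp, node IP.
		format := `{{.State.Status}} {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} {{(index .NetworkSettings.Networks "` + name + `").IPAddress}}`
		out, err := exec.Command("docker", "inspect", "-f", format, name).CombinedOutput()
		if err != nil {
			fmt.Println("docker inspect failed:", err, string(out))
			return
		}
		// For the container above this prints: running 36403 192.168.49.2
		fmt.Print(string(out))
	}
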
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-438037 -n ingress-addon-legacy-438037
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddonActivation FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-438037 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-438037 logs -n 25: (1.490071293s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-127648 ssh -n                                               | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | functional-127648 sudo cat                                             |                             |         |         |                     |                     |
	|                | /home/docker/cp-test.txt                                               |                             |         |         |                     |                     |
	| cp             | functional-127648 cp                                                   | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | functional-127648:/home/docker/cp-test.txt                             |                             |         |         |                     |                     |
	|                | /tmp/TestFunctionalparallelCpCmd674725946/001/cp-test.txt              |                             |         |         |                     |                     |
	| ssh            | functional-127648 ssh -n                                               | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | functional-127648 sudo cat                                             |                             |         |         |                     |                     |
	|                | /home/docker/cp-test.txt                                               |                             |         |         |                     |                     |
	| update-context | functional-127648                                                      | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-127648 image ls                                             | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	| update-context | functional-127648                                                      | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-127648                                                      | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-127648 image load --daemon                                  | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-127648               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-127648 image ls                                             | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	| image          | functional-127648 image save                                           | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-127648               |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-127648 image rm                                             | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-127648               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-127648 image ls                                             | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	| image          | functional-127648 image load                                           | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-127648 image ls                                             | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	| image          | functional-127648 image save --daemon                                  | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-127648               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-127648                                                      | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-127648                                                      | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-127648 ssh pgrep                                            | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-127648                                                      | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-127648                                                      | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-127648 image build -t                                       | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | localhost/my-image:functional-127648                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-127648 image ls                                             | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	| delete         | -p functional-127648                                                   | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	| start          | -p ingress-addon-legacy-438037                                         | ingress-addon-legacy-438037 | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:39 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-438037                                            | ingress-addon-legacy-438037 | jenkins | v1.31.2 | 14 Sep 23 22:39 UTC |                     |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 22:37:50
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 22:37:50.516976 2874006 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:37:50.517187 2874006 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:37:50.517212 2874006 out.go:309] Setting ErrFile to fd 2...
	I0914 22:37:50.517232 2874006 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:37:50.517515 2874006 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-2840729/.minikube/bin
	I0914 22:37:50.517998 2874006 out.go:303] Setting JSON to false
	I0914 22:37:50.519152 2874006 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":80415,"bootTime":1694650655,"procs":379,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0914 22:37:50.519242 2874006 start.go:138] virtualization:  
	I0914 22:37:50.521758 2874006 out.go:177] * [ingress-addon-legacy-438037] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0914 22:37:50.523878 2874006 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 22:37:50.525727 2874006 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 22:37:50.524013 2874006 notify.go:220] Checking for updates...
	I0914 22:37:50.527706 2874006 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 22:37:50.529391 2874006 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube
	I0914 22:37:50.531096 2874006 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 22:37:50.532861 2874006 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 22:37:50.534915 2874006 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 22:37:50.558559 2874006 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0914 22:37:50.558651 2874006 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 22:37:50.638811 2874006 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-09-14 22:37:50.629177969 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 22:37:50.638914 2874006 docker.go:294] overlay module found
	I0914 22:37:50.642237 2874006 out.go:177] * Using the docker driver based on user configuration
	I0914 22:37:50.644082 2874006 start.go:298] selected driver: docker
	I0914 22:37:50.644096 2874006 start.go:902] validating driver "docker" against <nil>
	I0914 22:37:50.644107 2874006 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 22:37:50.644749 2874006 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 22:37:50.710934 2874006 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-09-14 22:37:50.701749565 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 22:37:50.711098 2874006 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 22:37:50.711315 2874006 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 22:37:50.713789 2874006 out.go:177] * Using Docker driver with root privileges
	I0914 22:37:50.715845 2874006 cni.go:84] Creating CNI manager for ""
	I0914 22:37:50.715861 2874006 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 22:37:50.715881 2874006 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 22:37:50.715902 2874006 start_flags.go:321] config:
	{Name:ingress-addon-legacy-438037 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-438037 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:37:50.718055 2874006 out.go:177] * Starting control plane node ingress-addon-legacy-438037 in cluster ingress-addon-legacy-438037
	I0914 22:37:50.719684 2874006 cache.go:122] Beginning downloading kic base image for docker with crio
	I0914 22:37:50.721236 2874006 out.go:177] * Pulling base image ...
	I0914 22:37:50.723024 2874006 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0914 22:37:50.723112 2874006 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 in local docker daemon
	I0914 22:37:50.739885 2874006 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 in local docker daemon, skipping pull
	I0914 22:37:50.739907 2874006 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 exists in daemon, skipping load
	I0914 22:37:50.788734 2874006 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0914 22:37:50.788770 2874006 cache.go:57] Caching tarball of preloaded images
	I0914 22:37:50.788953 2874006 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0914 22:37:50.791021 2874006 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0914 22:37:50.793038 2874006 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0914 22:37:50.890030 2874006 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0914 22:38:00.454953 2874006 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0914 22:38:00.455051 2874006 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0914 22:38:01.632608 2874006 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0914 22:38:01.632984 2874006 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/config.json ...
	I0914 22:38:01.633021 2874006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/config.json: {Name:mk9b7219ecbd6eb32d8d24c3944a40400fe056ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:38:01.633213 2874006 cache.go:195] Successfully downloaded all kic artifacts
	I0914 22:38:01.633260 2874006 start.go:365] acquiring machines lock for ingress-addon-legacy-438037: {Name:mke9ca8a8be24e85471f3c976865beeaab0c5876 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:38:01.633321 2874006 start.go:369] acquired machines lock for "ingress-addon-legacy-438037" in 46.137µs
	I0914 22:38:01.633343 2874006 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-438037 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-438037 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:38:01.633416 2874006 start.go:125] createHost starting for "" (driver="docker")
	I0914 22:38:01.635844 2874006 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0914 22:38:01.636111 2874006 start.go:159] libmachine.API.Create for "ingress-addon-legacy-438037" (driver="docker")
	I0914 22:38:01.636146 2874006 client.go:168] LocalClient.Create starting
	I0914 22:38:01.636259 2874006 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem
	I0914 22:38:01.636295 2874006 main.go:141] libmachine: Decoding PEM data...
	I0914 22:38:01.636315 2874006 main.go:141] libmachine: Parsing certificate...
	I0914 22:38:01.636372 2874006 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem
	I0914 22:38:01.636393 2874006 main.go:141] libmachine: Decoding PEM data...
	I0914 22:38:01.636409 2874006 main.go:141] libmachine: Parsing certificate...
	I0914 22:38:01.636770 2874006 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-438037 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0914 22:38:01.654950 2874006 cli_runner.go:211] docker network inspect ingress-addon-legacy-438037 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0914 22:38:01.655040 2874006 network_create.go:281] running [docker network inspect ingress-addon-legacy-438037] to gather additional debugging logs...
	I0914 22:38:01.655061 2874006 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-438037
	W0914 22:38:01.678323 2874006 cli_runner.go:211] docker network inspect ingress-addon-legacy-438037 returned with exit code 1
	I0914 22:38:01.678356 2874006 network_create.go:284] error running [docker network inspect ingress-addon-legacy-438037]: docker network inspect ingress-addon-legacy-438037: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-438037 not found
	I0914 22:38:01.678371 2874006 network_create.go:286] output of [docker network inspect ingress-addon-legacy-438037]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-438037 not found
	
	** /stderr **
	I0914 22:38:01.678435 2874006 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 22:38:01.696924 2874006 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400000e740}
	I0914 22:38:01.696962 2874006 network_create.go:123] attempt to create docker network ingress-addon-legacy-438037 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0914 22:38:01.697019 2874006 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-438037 ingress-addon-legacy-438037
	I0914 22:38:01.768185 2874006 network_create.go:107] docker network ingress-addon-legacy-438037 192.168.49.0/24 created
	I0914 22:38:01.768217 2874006 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-438037" container
	I0914 22:38:01.768292 2874006 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0914 22:38:01.784674 2874006 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-438037 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-438037 --label created_by.minikube.sigs.k8s.io=true
	I0914 22:38:01.803217 2874006 oci.go:103] Successfully created a docker volume ingress-addon-legacy-438037
	I0914 22:38:01.803298 2874006 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-438037-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-438037 --entrypoint /usr/bin/test -v ingress-addon-legacy-438037:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 -d /var/lib
	I0914 22:38:03.331850 2874006 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-438037-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-438037 --entrypoint /usr/bin/test -v ingress-addon-legacy-438037:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 -d /var/lib: (1.528512759s)
	I0914 22:38:03.331883 2874006 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-438037
	I0914 22:38:03.331910 2874006 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0914 22:38:03.331930 2874006 kic.go:190] Starting extracting preloaded images to volume ...
	I0914 22:38:03.332024 2874006 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-438037:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 -I lz4 -xf /preloaded.tar -C /extractDir
	I0914 22:38:08.194674 2874006 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-438037:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 -I lz4 -xf /preloaded.tar -C /extractDir: (4.862603416s)
	I0914 22:38:08.194704 2874006 kic.go:199] duration metric: took 4.862772 seconds to extract preloaded images to volume
	W0914 22:38:08.194839 2874006 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0914 22:38:08.194956 2874006 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0914 22:38:08.272816 2874006 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-438037 --name ingress-addon-legacy-438037 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-438037 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-438037 --network ingress-addon-legacy-438037 --ip 192.168.49.2 --volume ingress-addon-legacy-438037:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503
	I0914 22:38:08.618158 2874006 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-438037 --format={{.State.Running}}
	I0914 22:38:08.638080 2874006 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-438037 --format={{.State.Status}}
	I0914 22:38:08.663478 2874006 cli_runner.go:164] Run: docker exec ingress-addon-legacy-438037 stat /var/lib/dpkg/alternatives/iptables
	I0914 22:38:08.737911 2874006 oci.go:144] the created container "ingress-addon-legacy-438037" has a running status.
	I0914 22:38:08.737944 2874006 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/ingress-addon-legacy-438037/id_rsa...
	I0914 22:38:09.104703 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/ingress-addon-legacy-438037/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0914 22:38:09.104745 2874006 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/ingress-addon-legacy-438037/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0914 22:38:09.142940 2874006 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-438037 --format={{.State.Status}}
	I0914 22:38:09.179443 2874006 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0914 22:38:09.179466 2874006 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-438037 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0914 22:38:09.278580 2874006 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-438037 --format={{.State.Status}}
	I0914 22:38:09.310043 2874006 machine.go:88] provisioning docker machine ...
	I0914 22:38:09.310077 2874006 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-438037"
	I0914 22:38:09.310149 2874006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-438037
	I0914 22:38:09.332042 2874006 main.go:141] libmachine: Using SSH client type: native
	I0914 22:38:09.332487 2874006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36403 <nil> <nil>}
	I0914 22:38:09.332606 2874006 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-438037 && echo "ingress-addon-legacy-438037" | sudo tee /etc/hostname
	I0914 22:38:09.574346 2874006 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-438037
	
	I0914 22:38:09.574481 2874006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-438037
	I0914 22:38:09.605001 2874006 main.go:141] libmachine: Using SSH client type: native
	I0914 22:38:09.605396 2874006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36403 <nil> <nil>}
	I0914 22:38:09.605417 2874006 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-438037' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-438037/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-438037' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:38:09.753846 2874006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:38:09.753873 2874006 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17243-2840729/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-2840729/.minikube}
	I0914 22:38:09.753892 2874006 ubuntu.go:177] setting up certificates
	I0914 22:38:09.753900 2874006 provision.go:83] configureAuth start
	I0914 22:38:09.753965 2874006 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-438037
	I0914 22:38:09.778148 2874006 provision.go:138] copyHostCerts
	I0914 22:38:09.778197 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem
	I0914 22:38:09.778228 2874006 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem, removing ...
	I0914 22:38:09.778238 2874006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem
	I0914 22:38:09.778296 2874006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem (1675 bytes)
	I0914 22:38:09.778370 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem
	I0914 22:38:09.778389 2874006 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem, removing ...
	I0914 22:38:09.778394 2874006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem
	I0914 22:38:09.778420 2874006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem (1078 bytes)
	I0914 22:38:09.778459 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem
	I0914 22:38:09.778478 2874006 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem, removing ...
	I0914 22:38:09.778482 2874006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem
	I0914 22:38:09.778507 2874006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem (1123 bytes)
	I0914 22:38:09.778563 2874006 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-438037 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-438037]
	I0914 22:38:10.029778 2874006 provision.go:172] copyRemoteCerts
	I0914 22:38:10.029850 2874006 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:38:10.029896 2874006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-438037
	I0914 22:38:10.050785 2874006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36403 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/ingress-addon-legacy-438037/id_rsa Username:docker}
	I0914 22:38:10.155127 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 22:38:10.155257 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:38:10.183655 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 22:38:10.183738 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0914 22:38:10.212397 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 22:38:10.212461 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 22:38:10.241684 2874006 provision.go:86] duration metric: configureAuth took 487.768161ms
	I0914 22:38:10.241710 2874006 ubuntu.go:193] setting minikube options for container-runtime
	I0914 22:38:10.241915 2874006 config.go:182] Loaded profile config "ingress-addon-legacy-438037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0914 22:38:10.242050 2874006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-438037
	I0914 22:38:10.260384 2874006 main.go:141] libmachine: Using SSH client type: native
	I0914 22:38:10.260868 2874006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36403 <nil> <nil>}
	I0914 22:38:10.260890 2874006 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:38:10.538769 2874006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:38:10.538830 2874006 machine.go:91] provisioned docker machine in 1.228762628s
	I0914 22:38:10.538856 2874006 client.go:171] LocalClient.Create took 8.902703694s
	I0914 22:38:10.538903 2874006 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-438037" took 8.902778779s
	I0914 22:38:10.538929 2874006 start.go:300] post-start starting for "ingress-addon-legacy-438037" (driver="docker")
	I0914 22:38:10.538956 2874006 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:38:10.539058 2874006 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:38:10.539126 2874006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-438037
	I0914 22:38:10.556885 2874006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36403 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/ingress-addon-legacy-438037/id_rsa Username:docker}
	I0914 22:38:10.659192 2874006 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:38:10.663483 2874006 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 22:38:10.663519 2874006 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 22:38:10.663530 2874006 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 22:38:10.663537 2874006 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0914 22:38:10.663549 2874006 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-2840729/.minikube/addons for local assets ...
	I0914 22:38:10.663609 2874006 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-2840729/.minikube/files for local assets ...
	I0914 22:38:10.663695 2874006 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem -> 28461092.pem in /etc/ssl/certs
	I0914 22:38:10.663707 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem -> /etc/ssl/certs/28461092.pem
	I0914 22:38:10.663817 2874006 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:38:10.674185 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem --> /etc/ssl/certs/28461092.pem (1708 bytes)
	I0914 22:38:10.701205 2874006 start.go:303] post-start completed in 162.246747ms
	I0914 22:38:10.701614 2874006 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-438037
	I0914 22:38:10.719069 2874006 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/config.json ...
	I0914 22:38:10.719350 2874006 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 22:38:10.719403 2874006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-438037
	I0914 22:38:10.736433 2874006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36403 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/ingress-addon-legacy-438037/id_rsa Username:docker}
	I0914 22:38:10.838331 2874006 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 22:38:10.843774 2874006 start.go:128] duration metric: createHost completed in 9.210340593s
	I0914 22:38:10.843799 2874006 start.go:83] releasing machines lock for "ingress-addon-legacy-438037", held for 9.21046298s
	I0914 22:38:10.843875 2874006 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-438037
	I0914 22:38:10.861318 2874006 ssh_runner.go:195] Run: cat /version.json
	I0914 22:38:10.861334 2874006 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:38:10.861377 2874006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-438037
	I0914 22:38:10.861403 2874006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-438037
	I0914 22:38:10.881747 2874006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36403 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/ingress-addon-legacy-438037/id_rsa Username:docker}
	I0914 22:38:10.888586 2874006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36403 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/ingress-addon-legacy-438037/id_rsa Username:docker}
	I0914 22:38:11.120307 2874006 ssh_runner.go:195] Run: systemctl --version
	I0914 22:38:11.125783 2874006 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:38:11.270023 2874006 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 22:38:11.275513 2874006 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:38:11.299861 2874006 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0914 22:38:11.299974 2874006 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:38:11.335487 2874006 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0914 22:38:11.335511 2874006 start.go:469] detecting cgroup driver to use...
	I0914 22:38:11.335547 2874006 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0914 22:38:11.335600 2874006 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:38:11.354396 2874006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:38:11.367998 2874006 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:38:11.368093 2874006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:38:11.385119 2874006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:38:11.402249 2874006 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:38:11.493050 2874006 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:38:11.598056 2874006 docker.go:212] disabling docker service ...
	I0914 22:38:11.598134 2874006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:38:11.619654 2874006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:38:11.633597 2874006 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:38:11.740376 2874006 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:38:11.839999 2874006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:38:11.853361 2874006 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:38:11.872187 2874006 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 22:38:11.872281 2874006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:38:11.884251 2874006 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:38:11.884317 2874006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:38:11.896270 2874006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:38:11.908242 2874006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:38:11.919918 2874006 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:38:11.930773 2874006 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:38:11.940651 2874006 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:38:11.950284 2874006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:38:12.044547 2874006 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:38:12.172148 2874006 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:38:12.172226 2874006 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:38:12.177244 2874006 start.go:537] Will wait 60s for crictl version
	I0914 22:38:12.177322 2874006 ssh_runner.go:195] Run: which crictl
	I0914 22:38:12.181788 2874006 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:38:12.220570 2874006 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0914 22:38:12.220673 2874006 ssh_runner.go:195] Run: crio --version
	I0914 22:38:12.264973 2874006 ssh_runner.go:195] Run: crio --version
	I0914 22:38:12.315341 2874006 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0914 22:38:12.316962 2874006 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-438037 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 22:38:12.333951 2874006 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0914 22:38:12.338293 2874006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:38:12.350930 2874006 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0914 22:38:12.350994 2874006 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:38:12.404233 2874006 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0914 22:38:12.404307 2874006 ssh_runner.go:195] Run: which lz4
	I0914 22:38:12.408770 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0914 22:38:12.408865 2874006 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 22:38:12.413302 2874006 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 22:38:12.413342 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I0914 22:38:14.452258 2874006 crio.go:444] Took 2.043425 seconds to copy over tarball
	I0914 22:38:14.452379 2874006 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 22:38:17.053421 2874006 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.60101232s)
	I0914 22:38:17.053448 2874006 crio.go:451] Took 2.601117 seconds to extract the tarball
	I0914 22:38:17.053458 2874006 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 22:38:17.402053 2874006 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:38:17.444809 2874006 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0914 22:38:17.444833 2874006 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 22:38:17.444871 2874006 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:38:17.445074 2874006 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0914 22:38:17.445161 2874006 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0914 22:38:17.445234 2874006 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0914 22:38:17.445303 2874006 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0914 22:38:17.445365 2874006 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0914 22:38:17.445425 2874006 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0914 22:38:17.445520 2874006 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0914 22:38:17.446404 2874006 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:38:17.446821 2874006 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0914 22:38:17.447089 2874006 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0914 22:38:17.447241 2874006 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0914 22:38:17.447377 2874006 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0914 22:38:17.447501 2874006 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0914 22:38:17.447714 2874006 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0914 22:38:17.447897 2874006 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	W0914 22:38:17.874747 2874006 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0914 22:38:17.874925 2874006 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0914 22:38:17.920815 2874006 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0914 22:38:17.920895 2874006 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0914 22:38:17.920963 2874006 ssh_runner.go:195] Run: which crictl
	I0914 22:38:17.925377 2874006 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	W0914 22:38:17.930904 2874006 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0914 22:38:17.931129 2874006 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W0914 22:38:17.931394 2874006 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0914 22:38:17.931650 2874006 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0914 22:38:17.949896 2874006 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W0914 22:38:17.959691 2874006 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0914 22:38:17.959900 2874006 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0914 22:38:17.979124 2874006 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	W0914 22:38:17.980223 2874006 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0914 22:38:17.980472 2874006 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W0914 22:38:17.980919 2874006 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0914 22:38:17.981100 2874006 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W0914 22:38:18.014398 2874006 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0914 22:38:18.014618 2874006 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:38:18.153331 2874006 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0914 22:38:18.153390 2874006 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0914 22:38:18.153443 2874006 ssh_runner.go:195] Run: which crictl
	I0914 22:38:18.153518 2874006 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0914 22:38:18.153540 2874006 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0914 22:38:18.153561 2874006 ssh_runner.go:195] Run: which crictl
	I0914 22:38:18.180702 2874006 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0914 22:38:18.180764 2874006 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0914 22:38:18.180825 2874006 ssh_runner.go:195] Run: which crictl
	I0914 22:38:18.180973 2874006 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0914 22:38:18.181019 2874006 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0914 22:38:18.181059 2874006 ssh_runner.go:195] Run: which crictl
	I0914 22:38:18.195779 2874006 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0914 22:38:18.195843 2874006 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0914 22:38:18.195915 2874006 ssh_runner.go:195] Run: which crictl
	I0914 22:38:18.196012 2874006 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0914 22:38:18.196044 2874006 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0914 22:38:18.196078 2874006 ssh_runner.go:195] Run: which crictl
	I0914 22:38:18.289857 2874006 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0914 22:38:18.289936 2874006 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:38:18.289986 2874006 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0914 22:38:18.289994 2874006 ssh_runner.go:195] Run: which crictl
	I0914 22:38:18.290092 2874006 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0914 22:38:18.290132 2874006 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0914 22:38:18.290103 2874006 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 22:38:18.290173 2874006 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0914 22:38:18.290205 2874006 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0914 22:38:18.438497 2874006 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0914 22:38:18.438563 2874006 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0914 22:38:18.438613 2874006 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:38:18.438688 2874006 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0914 22:38:18.441986 2874006 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0914 22:38:18.442080 2874006 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0914 22:38:18.447133 2874006 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0914 22:38:18.506234 2874006 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 22:38:18.506306 2874006 cache_images.go:92] LoadImages completed in 1.061458724s
	W0914 22:38:18.506367 2874006 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
	I0914 22:38:18.506446 2874006 ssh_runner.go:195] Run: crio config
	I0914 22:38:18.572127 2874006 cni.go:84] Creating CNI manager for ""
	I0914 22:38:18.572197 2874006 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 22:38:18.572245 2874006 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:38:18.572288 2874006 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-438037 NodeName:ingress-addon-legacy-438037 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 22:38:18.572471 2874006 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-438037"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:38:18.572602 2874006 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-438037 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-438037 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 22:38:18.572689 2874006 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0914 22:38:18.583132 2874006 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:38:18.583227 2874006 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:38:18.593542 2874006 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0914 22:38:18.613545 2874006 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0914 22:38:18.633331 2874006 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0914 22:38:18.653222 2874006 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0914 22:38:18.657536 2874006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:38:18.670558 2874006 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037 for IP: 192.168.49.2
	I0914 22:38:18.670587 2874006 certs.go:190] acquiring lock for shared ca certs: {Name:mk7b43b7d537d49c569d06654003547535d1ca4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:38:18.670725 2874006 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.key
	I0914 22:38:18.670770 2874006 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.key
	I0914 22:38:18.670817 2874006 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.key
	I0914 22:38:18.670834 2874006 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt with IP's: []
	I0914 22:38:19.236344 2874006 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt ...
	I0914 22:38:19.236371 2874006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: {Name:mkc07f926e47dd7d4a3a52c66086888f6611c161 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:38:19.236594 2874006 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.key ...
	I0914 22:38:19.236607 2874006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.key: {Name:mk443293d6a4cc6d753f8ccc8849273d56660101 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:38:19.236698 2874006 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.key.dd3b5fb2
	I0914 22:38:19.236718 2874006 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0914 22:38:19.679754 2874006 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.crt.dd3b5fb2 ...
	I0914 22:38:19.679783 2874006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.crt.dd3b5fb2: {Name:mkd57d3090f93d8fab2f514d3f90d19e1e49e7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:38:19.679966 2874006 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.key.dd3b5fb2 ...
	I0914 22:38:19.679979 2874006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.key.dd3b5fb2: {Name:mk890fa1d3c6c217dab198b706f5e63d213e8bfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:38:19.680068 2874006 certs.go:337] copying /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.crt
	I0914 22:38:19.680146 2874006 certs.go:341] copying /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.key
	I0914 22:38:19.680205 2874006 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/proxy-client.key
	I0914 22:38:19.680221 2874006 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/proxy-client.crt with IP's: []
	I0914 22:38:20.315404 2874006 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/proxy-client.crt ...
	I0914 22:38:20.315433 2874006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/proxy-client.crt: {Name:mk2ddfde646fde62c20235a38ba8af63e946e80c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:38:20.315619 2874006 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/proxy-client.key ...
	I0914 22:38:20.315632 2874006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/proxy-client.key: {Name:mk82780cd8bda8c67e216cb828a35fd78be8194b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:38:20.315710 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 22:38:20.315726 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 22:38:20.315738 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 22:38:20.315752 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 22:38:20.315764 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 22:38:20.315782 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 22:38:20.315794 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 22:38:20.315805 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 22:38:20.315863 2874006 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/2846109.pem (1338 bytes)
	W0914 22:38:20.315903 2874006 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/2846109_empty.pem, impossibly tiny 0 bytes
	I0914 22:38:20.315917 2874006 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:38:20.315953 2874006 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:38:20.315987 2874006 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:38:20.316016 2874006 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem (1675 bytes)
	I0914 22:38:20.316062 2874006 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem (1708 bytes)
	I0914 22:38:20.316098 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:38:20.316114 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/2846109.pem -> /usr/share/ca-certificates/2846109.pem
	I0914 22:38:20.316126 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem -> /usr/share/ca-certificates/28461092.pem
	I0914 22:38:20.316758 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:38:20.344570 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 22:38:20.372606 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:38:20.400626 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 22:38:20.427775 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:38:20.455569 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 22:38:20.485024 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:38:20.515114 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:38:20.544955 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:38:20.573029 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/2846109.pem --> /usr/share/ca-certificates/2846109.pem (1338 bytes)
	I0914 22:38:20.600590 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem --> /usr/share/ca-certificates/28461092.pem (1708 bytes)
	I0914 22:38:20.627407 2874006 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:38:20.647610 2874006 ssh_runner.go:195] Run: openssl version
	I0914 22:38:20.654411 2874006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:38:20.666144 2874006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:38:20.670862 2874006 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 22:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:38:20.670923 2874006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:38:20.679378 2874006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:38:20.691165 2874006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2846109.pem && ln -fs /usr/share/ca-certificates/2846109.pem /etc/ssl/certs/2846109.pem"
	I0914 22:38:20.702694 2874006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2846109.pem
	I0914 22:38:20.707326 2874006 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 22:34 /usr/share/ca-certificates/2846109.pem
	I0914 22:38:20.707436 2874006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2846109.pem
	I0914 22:38:20.715864 2874006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2846109.pem /etc/ssl/certs/51391683.0"
	I0914 22:38:20.727508 2874006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28461092.pem && ln -fs /usr/share/ca-certificates/28461092.pem /etc/ssl/certs/28461092.pem"
	I0914 22:38:20.739071 2874006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28461092.pem
	I0914 22:38:20.743724 2874006 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 22:34 /usr/share/ca-certificates/28461092.pem
	I0914 22:38:20.743787 2874006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28461092.pem
	I0914 22:38:20.752032 2874006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/28461092.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:38:20.763443 2874006 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:38:20.767732 2874006 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 22:38:20.767782 2874006 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-438037 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-438037 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:38:20.767855 2874006 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:38:20.767913 2874006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:38:20.813940 2874006 cri.go:89] found id: ""
	I0914 22:38:20.814007 2874006 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:38:20.824712 2874006 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:38:20.835192 2874006 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0914 22:38:20.835276 2874006 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:38:20.845640 2874006 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:38:20.845735 2874006 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0914 22:38:20.901714 2874006 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0914 22:38:20.902001 2874006 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 22:38:20.954183 2874006 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0914 22:38:20.954278 2874006 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1044-aws
	I0914 22:38:20.954316 2874006 kubeadm.go:322] OS: Linux
	I0914 22:38:20.954362 2874006 kubeadm.go:322] CGROUPS_CPU: enabled
	I0914 22:38:20.954410 2874006 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0914 22:38:20.954458 2874006 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0914 22:38:20.954512 2874006 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0914 22:38:20.954560 2874006 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0914 22:38:20.954612 2874006 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0914 22:38:21.044516 2874006 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 22:38:21.044623 2874006 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 22:38:21.044714 2874006 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 22:38:21.274414 2874006 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 22:38:21.275809 2874006 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 22:38:21.276062 2874006 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0914 22:38:21.384887 2874006 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 22:38:21.390189 2874006 out.go:204]   - Generating certificates and keys ...
	I0914 22:38:21.390379 2874006 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 22:38:21.390483 2874006 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 22:38:22.388587 2874006 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 22:38:22.682617 2874006 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0914 22:38:23.047196 2874006 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0914 22:38:23.726309 2874006 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0914 22:38:23.877082 2874006 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0914 22:38:23.877705 2874006 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-438037 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0914 22:38:25.534565 2874006 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0914 22:38:25.534935 2874006 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-438037 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0914 22:38:26.379184 2874006 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 22:38:26.584991 2874006 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 22:38:27.187194 2874006 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0914 22:38:27.187436 2874006 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 22:38:27.525918 2874006 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 22:38:28.245290 2874006 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 22:38:28.942501 2874006 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 22:38:29.718592 2874006 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 22:38:29.719274 2874006 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 22:38:29.721890 2874006 out.go:204]   - Booting up control plane ...
	I0914 22:38:29.721986 2874006 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 22:38:29.737480 2874006 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 22:38:29.739104 2874006 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 22:38:29.740296 2874006 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 22:38:29.743140 2874006 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 22:38:41.247068 2874006 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.502590 seconds
	I0914 22:38:41.247189 2874006 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 22:38:41.257140 2874006 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 22:38:41.786435 2874006 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 22:38:41.786584 2874006 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-438037 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0914 22:38:42.297945 2874006 kubeadm.go:322] [bootstrap-token] Using token: jnwj02.72mnz06o7v62mu14
	I0914 22:38:42.300132 2874006 out.go:204]   - Configuring RBAC rules ...
	I0914 22:38:42.300255 2874006 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 22:38:42.303885 2874006 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 22:38:42.317761 2874006 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 22:38:42.324842 2874006 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 22:38:42.335074 2874006 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 22:38:42.339072 2874006 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 22:38:42.351714 2874006 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 22:38:42.662180 2874006 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 22:38:42.799357 2874006 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 22:38:42.799375 2874006 kubeadm.go:322] 
	I0914 22:38:42.799465 2874006 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 22:38:42.799487 2874006 kubeadm.go:322] 
	I0914 22:38:42.799582 2874006 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 22:38:42.799593 2874006 kubeadm.go:322] 
	I0914 22:38:42.799629 2874006 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 22:38:42.799702 2874006 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 22:38:42.799750 2874006 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 22:38:42.799755 2874006 kubeadm.go:322] 
	I0914 22:38:42.799809 2874006 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 22:38:42.799900 2874006 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 22:38:42.799976 2874006 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 22:38:42.799991 2874006 kubeadm.go:322] 
	I0914 22:38:42.800079 2874006 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 22:38:42.800169 2874006 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 22:38:42.800177 2874006 kubeadm.go:322] 
	I0914 22:38:42.800259 2874006 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token jnwj02.72mnz06o7v62mu14 \
	I0914 22:38:42.800363 2874006 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a79084f9d3253dc9cd91fd80defbeb60d0999d7c0aaf6667eb02a65d009cd9dc \
	I0914 22:38:42.800388 2874006 kubeadm.go:322]     --control-plane 
	I0914 22:38:42.800395 2874006 kubeadm.go:322] 
	I0914 22:38:42.800475 2874006 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 22:38:42.800506 2874006 kubeadm.go:322] 
	I0914 22:38:42.800584 2874006 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jnwj02.72mnz06o7v62mu14 \
	I0914 22:38:42.800687 2874006 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a79084f9d3253dc9cd91fd80defbeb60d0999d7c0aaf6667eb02a65d009cd9dc 
	I0914 22:38:42.803411 2874006 kubeadm.go:322] W0914 22:38:20.900891    1226 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0914 22:38:42.803625 2874006 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-aws\n", err: exit status 1
	I0914 22:38:42.803727 2874006 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 22:38:42.803855 2874006 kubeadm.go:322] W0914 22:38:29.737141    1226 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0914 22:38:42.803983 2874006 kubeadm.go:322] W0914 22:38:29.739186    1226 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0914 22:38:42.804001 2874006 cni.go:84] Creating CNI manager for ""
	I0914 22:38:42.804010 2874006 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 22:38:42.805875 2874006 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0914 22:38:42.807684 2874006 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 22:38:42.812456 2874006 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0914 22:38:42.812479 2874006 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0914 22:38:42.834382 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 22:38:43.282661 2874006 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:38:43.282776 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:43.282777 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=ingress-addon-legacy-438037 minikube.k8s.io/updated_at=2023_09_14T22_38_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:43.422303 2874006 ops.go:34] apiserver oom_adj: -16
	I0914 22:38:43.422418 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:43.514690 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:44.106220 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:44.605883 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:45.106062 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:45.606239 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:46.106220 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:46.605622 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:47.105824 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:47.606650 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:48.106512 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:48.606477 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:49.106530 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:49.606481 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:50.106598 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:50.606101 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:51.106250 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:51.606645 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:52.105792 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:52.606326 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:53.105652 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:53.606184 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:54.105657 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:54.606438 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:55.106448 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:55.606024 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:56.105650 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:56.605667 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:57.105732 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:57.605898 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:58.106117 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:58.212763 2874006 kubeadm.go:1081] duration metric: took 14.930045065s to wait for elevateKubeSystemPrivileges.
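
The burst of "kubectl get sa default" calls above, repeated roughly every 500ms between 22:38:43 and 22:38:58, is the wait for the default service account to exist before the minikube-rbac cluster role binding is useful. A minimal sketch of that kind of poll loop, written as a hypothetical standalone helper (the kubectl binary and kubeconfig paths are taken from the log; everything else is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls "kubectl get sa default" until it succeeds or the
    // timeout expires, mirroring the ~500ms retry cadence visible in the log above.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            err := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig="+kubeconfig).Run()
            if err == nil {
                return nil // the default service account exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        err := waitForDefaultSA(
            "/var/lib/minikube/binaries/v1.18.20/kubectl",
            "/var/lib/minikube/kubeconfig",
            2*time.Minute,
        )
        fmt.Println("wait result:", err)
    }

In the run above the loop finished after about 15 seconds, once the service account appeared.
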
	I0914 22:38:58.212794 2874006 kubeadm.go:406] StartCluster complete in 37.445016104s
	I0914 22:38:58.212811 2874006 settings.go:142] acquiring lock: {Name:mk797c549b93011f59a1b1413899d7ef3e9584bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:38:58.212868 2874006 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 22:38:58.213577 2874006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/kubeconfig: {Name:mk7bbed64d52f47ff1629e01e738a8a5f092c9ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:38:58.214272 2874006 kapi.go:59] client config for ingress-addon-legacy-438037: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.key", CAFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 22:38:58.215625 2874006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:38:58.215863 2874006 config.go:182] Loaded profile config "ingress-addon-legacy-438037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0914 22:38:58.215900 2874006 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:38:58.215957 2874006 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-438037"
	I0914 22:38:58.215971 2874006 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-438037"
	I0914 22:38:58.216026 2874006 host.go:66] Checking if "ingress-addon-legacy-438037" exists ...
	I0914 22:38:58.216473 2874006 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-438037 --format={{.State.Status}}
	I0914 22:38:58.216989 2874006 cert_rotation.go:137] Starting client certificate rotation controller
	I0914 22:38:58.217031 2874006 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-438037"
	I0914 22:38:58.217048 2874006 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-438037"
	I0914 22:38:58.217309 2874006 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-438037 --format={{.State.Status}}
	I0914 22:38:58.264394 2874006 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:38:58.266436 2874006 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:38:58.266458 2874006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:38:58.266534 2874006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-438037
	I0914 22:38:58.278577 2874006 kapi.go:59] client config for ingress-addon-legacy-438037: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.key", CAFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 22:38:58.297252 2874006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36403 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/ingress-addon-legacy-438037/id_rsa Username:docker}
	W0914 22:38:58.334273 2874006 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-438037" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0914 22:38:58.334303 2874006 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0914 22:38:58.334326 2874006 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:38:58.336481 2874006 out.go:177] * Verifying Kubernetes components...
	I0914 22:38:58.338863 2874006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:38:58.340645 2874006 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-438037"
	I0914 22:38:58.340679 2874006 host.go:66] Checking if "ingress-addon-legacy-438037" exists ...
	I0914 22:38:58.341120 2874006 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-438037 --format={{.State.Status}}
	I0914 22:38:58.371511 2874006 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:38:58.371536 2874006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:38:58.371598 2874006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-438037
	I0914 22:38:58.407047 2874006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36403 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/ingress-addon-legacy-438037/id_rsa Username:docker}
	I0914 22:38:58.453507 2874006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:38:58.495061 2874006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 22:38:58.495553 2874006 kapi.go:59] client config for ingress-addon-legacy-438037: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.key", CAFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 22:38:58.495807 2874006 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-438037" to be "Ready" ...
	I0914 22:38:58.641109 2874006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:38:59.093546 2874006 start.go:917] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0914 22:38:59.097066 2874006 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0914 22:38:59.100985 2874006 addons.go:502] enable addons completed in 885.070906ms: enabled=[storage-provisioner default-storageclass]
	I0914 22:39:00.567086 2874006 node_ready.go:58] node "ingress-addon-legacy-438037" has status "Ready":"False"
	I0914 22:39:03.065939 2874006 node_ready.go:58] node "ingress-addon-legacy-438037" has status "Ready":"False"
	I0914 22:39:05.066123 2874006 node_ready.go:58] node "ingress-addon-legacy-438037" has status "Ready":"False"
	I0914 22:39:07.566778 2874006 node_ready.go:58] node "ingress-addon-legacy-438037" has status "Ready":"False"
	I0914 22:39:10.066400 2874006 node_ready.go:58] node "ingress-addon-legacy-438037" has status "Ready":"False"
	I0914 22:39:12.066940 2874006 node_ready.go:58] node "ingress-addon-legacy-438037" has status "Ready":"False"
	I0914 22:39:14.566048 2874006 node_ready.go:58] node "ingress-addon-legacy-438037" has status "Ready":"False"
	I0914 22:39:16.566084 2874006 node_ready.go:49] node "ingress-addon-legacy-438037" has status "Ready":"True"
	I0914 22:39:16.566111 2874006 node_ready.go:38] duration metric: took 18.07028321s waiting for node "ingress-addon-legacy-438037" to be "Ready" ...
	I0914 22:39:16.566123 2874006 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:39:16.574537 2874006 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-5vlzt" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:18.581880 2874006 pod_ready.go:102] pod "coredns-66bff467f8-5vlzt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-09-14 22:38:58 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0914 22:39:20.584675 2874006 pod_ready.go:102] pod "coredns-66bff467f8-5vlzt" in "kube-system" namespace has status "Ready":"False"
	I0914 22:39:22.584731 2874006 pod_ready.go:102] pod "coredns-66bff467f8-5vlzt" in "kube-system" namespace has status "Ready":"False"
	I0914 22:39:25.084619 2874006 pod_ready.go:102] pod "coredns-66bff467f8-5vlzt" in "kube-system" namespace has status "Ready":"False"
	I0914 22:39:25.591194 2874006 pod_ready.go:92] pod "coredns-66bff467f8-5vlzt" in "kube-system" namespace has status "Ready":"True"
	I0914 22:39:25.591220 2874006 pod_ready.go:81] duration metric: took 9.01664169s waiting for pod "coredns-66bff467f8-5vlzt" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:25.591231 2874006 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-hzd5r" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:27.609926 2874006 pod_ready.go:102] pod "coredns-66bff467f8-hzd5r" in "kube-system" namespace has status "Ready":"False"
	I0914 22:39:29.610327 2874006 pod_ready.go:92] pod "coredns-66bff467f8-hzd5r" in "kube-system" namespace has status "Ready":"True"
	I0914 22:39:29.610352 2874006 pod_ready.go:81] duration metric: took 4.019113284s waiting for pod "coredns-66bff467f8-hzd5r" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:29.610364 2874006 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-438037" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:29.614890 2874006 pod_ready.go:92] pod "etcd-ingress-addon-legacy-438037" in "kube-system" namespace has status "Ready":"True"
	I0914 22:39:29.614914 2874006 pod_ready.go:81] duration metric: took 4.541863ms waiting for pod "etcd-ingress-addon-legacy-438037" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:29.614928 2874006 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-438037" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:29.619327 2874006 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-438037" in "kube-system" namespace has status "Ready":"True"
	I0914 22:39:29.619348 2874006 pod_ready.go:81] duration metric: took 4.412616ms waiting for pod "kube-apiserver-ingress-addon-legacy-438037" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:29.619359 2874006 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-438037" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:29.623761 2874006 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-438037" in "kube-system" namespace has status "Ready":"True"
	I0914 22:39:29.623782 2874006 pod_ready.go:81] duration metric: took 4.416194ms waiting for pod "kube-controller-manager-ingress-addon-legacy-438037" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:29.623792 2874006 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-79mhd" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:29.628226 2874006 pod_ready.go:92] pod "kube-proxy-79mhd" in "kube-system" namespace has status "Ready":"True"
	I0914 22:39:29.628244 2874006 pod_ready.go:81] duration metric: took 4.445379ms waiting for pod "kube-proxy-79mhd" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:29.628254 2874006 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-438037" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:29.805622 2874006 request.go:629] Waited for 177.293206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-438037
	I0914 22:39:30.005814 2874006 request.go:629] Waited for 197.349398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-438037
	I0914 22:39:30.008713 2874006 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-438037" in "kube-system" namespace has status "Ready":"True"
	I0914 22:39:30.008738 2874006 pod_ready.go:81] duration metric: took 380.477176ms waiting for pod "kube-scheduler-ingress-addon-legacy-438037" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:30.008751 2874006 pod_ready.go:38] duration metric: took 13.442617969s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
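
The node_ready and pod_ready waits above (22:38:58 through 22:39:30) all reduce to polling the Ready condition on the node and on each system-critical pod. A minimal sketch of the pod-side check, as a hypothetical helper using the same pinned kubectl and kubeconfig seen in the log (the namespace and pod name below are taken from the log; the helper itself is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // podReady reports whether a pod has its Ready condition set to "True", the
    // same signal the pod_ready waits above keep checking for coredns, etcd,
    // kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler.
    func podReady(kubectl, kubeconfig, namespace, pod string) (bool, error) {
        out, err := exec.Command("sudo", kubectl, "get", "pod", pod,
            "-n", namespace, "--kubeconfig="+kubeconfig,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        ok, err := podReady(
            "/var/lib/minikube/binaries/v1.18.20/kubectl",
            "/var/lib/minikube/kubeconfig",
            "kube-system",
            "coredns-66bff467f8-5vlzt",
        )
        fmt.Println("ready:", ok, "err:", err)
    }
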
	I0914 22:39:30.008769 2874006 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:39:30.008829 2874006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:39:30.023339 2874006 api_server.go:72] duration metric: took 31.688975029s to wait for apiserver process to appear ...
	I0914 22:39:30.023366 2874006 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:39:30.023385 2874006 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 22:39:30.032956 2874006 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0914 22:39:30.033866 2874006 api_server.go:141] control plane version: v1.18.20
	I0914 22:39:30.033891 2874006 api_server.go:131] duration metric: took 10.516849ms to wait for apiserver health ...
	I0914 22:39:30.033900 2874006 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:39:30.205236 2874006 request.go:629] Waited for 171.23182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0914 22:39:30.211376 2874006 system_pods.go:59] 9 kube-system pods found
	I0914 22:39:30.211411 2874006 system_pods.go:61] "coredns-66bff467f8-5vlzt" [6e80d32c-0f03-48b3-a30a-21f772c3a5c1] Running
	I0914 22:39:30.211418 2874006 system_pods.go:61] "coredns-66bff467f8-hzd5r" [6df64232-0e4b-4f95-863f-8195e0b19ed6] Running
	I0914 22:39:30.211424 2874006 system_pods.go:61] "etcd-ingress-addon-legacy-438037" [dd33171a-d5ff-434f-95b7-48f30add3ebb] Running
	I0914 22:39:30.211429 2874006 system_pods.go:61] "kindnet-ft9s6" [d5386d34-1bfd-488c-a959-d4847ddb8a76] Running
	I0914 22:39:30.211435 2874006 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-438037" [15c1a842-b237-4448-b215-17be2692d221] Running
	I0914 22:39:30.211440 2874006 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-438037" [dcf4fd51-6e75-46c4-93c4-59e6ef3deb4c] Running
	I0914 22:39:30.211445 2874006 system_pods.go:61] "kube-proxy-79mhd" [a9cc9c4a-d968-4403-a34b-9ea2c671326f] Running
	I0914 22:39:30.211450 2874006 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-438037" [61e4ecc7-a0f5-412c-ad2f-e3e5cce42226] Running
	I0914 22:39:30.211461 2874006 system_pods.go:61] "storage-provisioner" [0a1d1b79-2747-4d8d-8b93-c687e75482f0] Running
	I0914 22:39:30.211471 2874006 system_pods.go:74] duration metric: took 177.564838ms to wait for pod list to return data ...
	I0914 22:39:30.211482 2874006 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:39:30.405891 2874006 request.go:629] Waited for 194.317548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0914 22:39:30.408360 2874006 default_sa.go:45] found service account: "default"
	I0914 22:39:30.408391 2874006 default_sa.go:55] duration metric: took 196.899051ms for default service account to be created ...
	I0914 22:39:30.408400 2874006 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:39:30.605661 2874006 request.go:629] Waited for 197.196527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0914 22:39:30.612068 2874006 system_pods.go:86] 9 kube-system pods found
	I0914 22:39:30.612101 2874006 system_pods.go:89] "coredns-66bff467f8-5vlzt" [6e80d32c-0f03-48b3-a30a-21f772c3a5c1] Running
	I0914 22:39:30.612107 2874006 system_pods.go:89] "coredns-66bff467f8-hzd5r" [6df64232-0e4b-4f95-863f-8195e0b19ed6] Running
	I0914 22:39:30.612113 2874006 system_pods.go:89] "etcd-ingress-addon-legacy-438037" [dd33171a-d5ff-434f-95b7-48f30add3ebb] Running
	I0914 22:39:30.612117 2874006 system_pods.go:89] "kindnet-ft9s6" [d5386d34-1bfd-488c-a959-d4847ddb8a76] Running
	I0914 22:39:30.612122 2874006 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-438037" [15c1a842-b237-4448-b215-17be2692d221] Running
	I0914 22:39:30.612134 2874006 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-438037" [dcf4fd51-6e75-46c4-93c4-59e6ef3deb4c] Running
	I0914 22:39:30.612139 2874006 system_pods.go:89] "kube-proxy-79mhd" [a9cc9c4a-d968-4403-a34b-9ea2c671326f] Running
	I0914 22:39:30.612144 2874006 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-438037" [61e4ecc7-a0f5-412c-ad2f-e3e5cce42226] Running
	I0914 22:39:30.612152 2874006 system_pods.go:89] "storage-provisioner" [0a1d1b79-2747-4d8d-8b93-c687e75482f0] Running
	I0914 22:39:30.612159 2874006 system_pods.go:126] duration metric: took 203.753282ms to wait for k8s-apps to be running ...
	I0914 22:39:30.612173 2874006 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:39:30.612234 2874006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:39:30.625811 2874006 system_svc.go:56] duration metric: took 13.625926ms WaitForService to wait for kubelet.
	I0914 22:39:30.625876 2874006 kubeadm.go:581] duration metric: took 32.291520904s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:39:30.625910 2874006 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:39:30.805158 2874006 request.go:629] Waited for 179.157706ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0914 22:39:30.807966 2874006 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 22:39:30.807998 2874006 node_conditions.go:123] node cpu capacity is 2
	I0914 22:39:30.808011 2874006 node_conditions.go:105] duration metric: took 182.089134ms to run NodePressure ...
	I0914 22:39:30.808023 2874006 start.go:228] waiting for startup goroutines ...
	I0914 22:39:30.808029 2874006 start.go:233] waiting for cluster config update ...
	I0914 22:39:30.808039 2874006 start.go:242] writing updated cluster config ...
	I0914 22:39:30.808324 2874006 ssh_runner.go:195] Run: rm -f paused
	I0914 22:39:30.862128 2874006 start.go:600] kubectl: 1.28.2, cluster: 1.18.20 (minor skew: 10)
	I0914 22:39:30.865248 2874006 out.go:177] 
	W0914 22:39:30.867843 2874006 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0914 22:39:30.870043 2874006 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0914 22:39:30.872489 2874006 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-438037" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Sep 14 22:44:00 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:44:00.604627706Z" level=info msg="Trying to access \"docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Sep 14 22:44:12 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:44:12.081467806Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=e12e983d-7148-4208-941b-06b669fe8dd1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:44:12 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:44:12.081738510Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=e12e983d-7148-4208-941b-06b669fe8dd1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:44:24 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:44:24.081329517Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=bd8b713b-151c-4d2c-a254-ffbafb995c27 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:44:24 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:44:24.081606636Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=bd8b713b-151c-4d2c-a254-ffbafb995c27 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:44:38 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:44:38.081361804Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=0d4dc48f-03c8-4637-90d8-47f9a77ec56f name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:44:38 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:44:38.081636716Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=0d4dc48f-03c8-4637-90d8-47f9a77ec56f name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:44:45 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:44:45.081354921Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=47b46605-b418-4199-b550-02fb45254d25 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:44:45 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:44:45.081640714Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=47b46605-b418-4199-b550-02fb45254d25 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:44:52 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:44:52.081333861Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=4a2dc3ce-97c2-403e-8b79-0dd473278f0d name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:44:52 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:44:52.081649258Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=4a2dc3ce-97c2-403e-8b79-0dd473278f0d name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:44:56 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:44:56.081383146Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=e9fe53e8-da32-46a7-9773-60d00415a3ae name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:44:56 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:44:56.081652676Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=e9fe53e8-da32-46a7-9773-60d00415a3ae name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:45:03 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:45:03.081242198Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=376ad4a5-0dd8-46dc-b0cc-e8d3278a1217 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:45:03 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:45:03.081527884Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=376ad4a5-0dd8-46dc-b0cc-e8d3278a1217 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:45:10 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:45:10.081133039Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=eab03861-bbcd-4150-8be5-664201ffa17a name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:45:10 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:45:10.081425683Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=eab03861-bbcd-4150-8be5-664201ffa17a name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:45:16 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:45:16.081455242Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=d97bb549-7fa2-45a5-8bd9-226ae7e78400 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:45:16 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:45:16.081713498Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=d97bb549-7fa2-45a5-8bd9-226ae7e78400 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:45:21 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:45:21.081214148Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=0a601ffe-5e20-4d4c-b817-ac776077a663 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:45:21 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:45:21.081503272Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=0a601ffe-5e20-4d4c-b817-ac776077a663 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:45:28 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:45:28.081281601Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=6abc9bec-34f9-4b86-b158-d05cbd7823e9 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:45:28 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:45:28.081561150Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=6abc9bec-34f9-4b86-b158-d05cbd7823e9 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:45:28 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:45:28.082074881Z" level=info msg="Pulling image: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=25531f04-6784-4cb4-bc22-2559b39826bc name=/runtime.v1alpha2.ImageService/PullImage
	Sep 14 22:45:28 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:45:28.084266149Z" level=info msg="Trying to access \"docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
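
The repeated "Checking image status ... not found" and "Pulling image" pairs above show CRI-O retrying the pull of docker.io/jettech/kube-webhook-certgen for several minutes without completing it, which is consistent with the ingress admission jobs staying Pending in the failing Ingress test. A minimal way to check the image by hand, sketched as a hypothetical helper that shells out to crictl (assumes sudo and crictl are available inside the node):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // imagePresent asks CRI-O (via crictl) whether an image is already in the
    // local store; "crictl inspecti" exits non-zero when the image is not found,
    // which is the "Image ... not found" state the CRI-O log above keeps reporting.
    func imagePresent(image string) bool {
        return exec.Command("sudo", "crictl", "inspecti", image).Run() == nil
    }

    func main() {
        img := "docker.io/jettech/kube-webhook-certgen:v1.5.1"
        if imagePresent(img) {
            fmt.Println("image already pulled:", img)
            return
        }
        fmt.Println("image missing, attempting one pull:", img)
        // In the failing run CRI-O keeps retrying this pull without completing it.
        out, err := exec.Command("sudo", "crictl", "pull", img).CombinedOutput()
        fmt.Printf("pull finished: err=%v\n%s\n", err, out)
    }
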
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cafe9f18505ce       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2   6 minutes ago       Running             storage-provisioner       0                   0f1b0c9298086       storage-provisioner
	b1e4183cba37c       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                  6 minutes ago       Running             coredns                   0                   ccb2db598c723       coredns-66bff467f8-hzd5r
	9f402e75947ee       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                  6 minutes ago       Running             coredns                   0                   c04499f3b2a79       coredns-66bff467f8-5vlzt
	3f27f63906e23       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                6 minutes ago       Running             kindnet-cni               0                   aaaa4ba223b42       kindnet-ft9s6
	780e22127b8db       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                  6 minutes ago       Running             kube-proxy                0                   5906684b7acd1       kube-proxy-79mhd
	623b6b437d505       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                  6 minutes ago       Running             kube-apiserver            0                   197b4ed6dc804       kube-apiserver-ingress-addon-legacy-438037
	81d8212acfd52       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                  7 minutes ago       Running             kube-scheduler            0                   211a90213946c       kube-scheduler-ingress-addon-legacy-438037
	83f98203d414d       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                  7 minutes ago       Running             kube-controller-manager   0                   b280add1026b6       kube-controller-manager-ingress-addon-legacy-438037
	55334ffa86b91       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                  7 minutes ago       Running             etcd                      0                   45478d10af744       etcd-ingress-addon-legacy-438037
	
	* 
	* ==> coredns [9f402e75947ee904968f7e9e180fab397a2506e694d6b9a57d9c7bf1a73c9b32] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 45700869df5177c7f3d9f7a279928a55
	CoreDNS-1.6.7
	linux/arm64, go1.13.6, da7f65b
	[INFO] 127.0.0.1:35939 - 5770 "HINFO IN 8229871420030904252.7931612907154627456. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03709457s
	
	* 
	* ==> coredns [b1e4183cba37c7a4a2dc1f88d09a2f9aa668e181cd6dae13939244675ea721ba] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 45700869df5177c7f3d9f7a279928a55
	CoreDNS-1.6.7
	linux/arm64, go1.13.6, da7f65b
	[INFO] 127.0.0.1:53976 - 51578 "HINFO IN 2214536823561160362.6027128080179966188. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022974876s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-438037
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-438037
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=ingress-addon-legacy-438037
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T22_38_43_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 22:38:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-438037
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 22:45:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 22:44:46 +0000   Thu, 14 Sep 2023 22:38:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 22:44:46 +0000   Thu, 14 Sep 2023 22:38:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 22:44:46 +0000   Thu, 14 Sep 2023 22:38:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 22:44:46 +0000   Thu, 14 Sep 2023 22:39:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-438037
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 88171635ae7c44a0b058e3522c445eb5
	  System UUID:                e886bf26-0baa-409c-95b7-680bfcd56e0f
	  Boot ID:                    370886c1-a939-4b15-8117-498126d3502e
	  Kernel Version:             5.15.0-1044-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  ingress-nginx               ingress-nginx-admission-create-ghrnm                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  ingress-nginx               ingress-nginx-admission-patch-h4zhs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  ingress-nginx               ingress-nginx-controller-7fcf777cb7-s8f7c              100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m2s
	  kube-system                 coredns-66bff467f8-5vlzt                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m35s
	  kube-system                 coredns-66bff467f8-hzd5r                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m35s
	  kube-system                 etcd-ingress-addon-legacy-438037                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 kindnet-ft9s6                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m36s
	  kube-system                 kube-apiserver-ingress-addon-legacy-438037             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-438037    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 kube-proxy-79mhd                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 kube-scheduler-ingress-addon-legacy-438037             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             280Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  7m2s (x4 over 7m2s)  kubelet     Node ingress-addon-legacy-438037 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m2s (x5 over 7m2s)  kubelet     Node ingress-addon-legacy-438037 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m2s (x4 over 7m2s)  kubelet     Node ingress-addon-legacy-438037 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m47s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m47s                kubelet     Node ingress-addon-legacy-438037 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m47s                kubelet     Node ingress-addon-legacy-438037 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m47s                kubelet     Node ingress-addon-legacy-438037 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m34s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                6m17s                kubelet     Node ingress-addon-legacy-438037 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001074] FS-Cache: O-key=[8] '85703b0000000000'
	[  +0.000714] FS-Cache: N-cookie c=000000e5 [p=000000db fl=2 nc=0 na=1]
	[  +0.000899] FS-Cache: N-cookie d=00000000358e642b{9p.inode} n=0000000040a297ab
	[  +0.001017] FS-Cache: N-key=[8] '85703b0000000000'
	[  +2.012590] FS-Cache: Duplicate cookie detected
	[  +0.000690] FS-Cache: O-cookie c=000000dc [p=000000db fl=226 nc=0 na=1]
	[  +0.000956] FS-Cache: O-cookie d=00000000358e642b{9p.inode} n=0000000000e476c3
	[  +0.001056] FS-Cache: O-key=[8] '84703b0000000000'
	[  +0.000740] FS-Cache: N-cookie c=000000e7 [p=000000db fl=2 nc=0 na=1]
	[  +0.001048] FS-Cache: N-cookie d=00000000358e642b{9p.inode} n=00000000e4905bc3
	[  +0.001024] FS-Cache: N-key=[8] '84703b0000000000'
	[  +0.406786] FS-Cache: Duplicate cookie detected
	[  +0.000688] FS-Cache: O-cookie c=000000e1 [p=000000db fl=226 nc=0 na=1]
	[  +0.000951] FS-Cache: O-cookie d=00000000358e642b{9p.inode} n=000000007a274cdd
	[  +0.001021] FS-Cache: O-key=[8] '8a703b0000000000'
	[  +0.000726] FS-Cache: N-cookie c=000000e8 [p=000000db fl=2 nc=0 na=1]
	[  +0.000918] FS-Cache: N-cookie d=00000000358e642b{9p.inode} n=0000000038968ff8
	[  +0.001006] FS-Cache: N-key=[8] '8a703b0000000000'
	[  +4.128718] FS-Cache: Duplicate cookie detected
	[  +0.000680] FS-Cache: O-cookie c=000000ea [p=00000002 fl=222 nc=0 na=1]
	[  +0.001015] FS-Cache: O-cookie d=00000000fe6607cc{9P.session} n=000000001f02128f
	[  +0.001183] FS-Cache: O-key=[10] '34333134393838363731'
	[  +0.000776] FS-Cache: N-cookie c=000000eb [p=00000002 fl=2 nc=0 na=1]
	[  +0.000908] FS-Cache: N-cookie d=00000000fe6607cc{9P.session} n=00000000648dde5c
	[  +0.001093] FS-Cache: N-key=[10] '34333134393838363731'
	
	* 
	* ==> etcd [55334ffa86b91fe0538de4270106091fbede771928d115dc24738d4268024154] <==
	* raft2023/09/14 22:38:32 INFO: aec36adc501070cc became follower at term 0
	raft2023/09/14 22:38:32 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/09/14 22:38:32 INFO: aec36adc501070cc became follower at term 1
	raft2023/09/14 22:38:32 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-09-14 22:38:32.988159 W | auth: simple token is not cryptographically signed
	2023-09-14 22:38:32.993053 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-09-14 22:38:32.994110 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-09-14 22:38:32.995688 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	raft2023/09/14 22:38:32 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-09-14 22:38:32.995860 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-09-14 22:38:32.995963 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-14 22:38:32.996029 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/09/14 22:38:33 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/09/14 22:38:33 INFO: aec36adc501070cc became candidate at term 2
	raft2023/09/14 22:38:33 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/09/14 22:38:33 INFO: aec36adc501070cc became leader at term 2
	raft2023/09/14 22:38:33 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-09-14 22:38:33.902080 I | etcdserver: published {Name:ingress-addon-legacy-438037 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-09-14 22:38:33.936549 I | embed: ready to serve client requests
	2023-09-14 22:38:33.981814 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-14 22:38:34.126674 I | etcdserver: setting up the initial cluster version to 3.4
	2023-09-14 22:38:34.217415 I | embed: ready to serve client requests
	2023-09-14 22:38:34.316518 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-09-14 22:38:34.316703 I | etcdserver/api: enabled capabilities for version 3.4
	2023-09-14 22:38:34.348020 I | embed: serving client requests on 192.168.49.2:2379
	
	* 
	* ==> kernel <==
	*  22:45:33 up 22:27,  0 users,  load average: 0.01, 0.45, 1.17
	Linux ingress-addon-legacy-438037 5.15.0-1044-aws #49~20.04.1-Ubuntu SMP Mon Aug 21 17:10:24 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [3f27f63906e23bbd4a0bfdbbb2f77e9e07b0a2d175cadc6f0676cdd788aa947d] <==
	* I0914 22:43:32.085564       1 main.go:227] handling current node
	I0914 22:43:42.097188       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:43:42.097219       1 main.go:227] handling current node
	I0914 22:43:52.100669       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:43:52.100694       1 main.go:227] handling current node
	I0914 22:44:02.104106       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:44:02.104136       1 main.go:227] handling current node
	I0914 22:44:12.113208       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:44:12.113237       1 main.go:227] handling current node
	I0914 22:44:22.122309       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:44:22.122338       1 main.go:227] handling current node
	I0914 22:44:32.126327       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:44:32.126353       1 main.go:227] handling current node
	I0914 22:44:42.137130       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:44:42.137161       1 main.go:227] handling current node
	I0914 22:44:52.142988       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:44:52.143018       1 main.go:227] handling current node
	I0914 22:45:02.146001       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:45:02.146032       1 main.go:227] handling current node
	I0914 22:45:12.155524       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:45:12.155552       1 main.go:227] handling current node
	I0914 22:45:22.167674       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:45:22.167700       1 main.go:227] handling current node
	I0914 22:45:32.179930       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:45:32.179960       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [623b6b437d50508629b05820596abf28e9c10a1718b5b4657100c55687a897e3] <==
	* I0914 22:38:39.765305       1 establishing_controller.go:76] Starting EstablishingController
	I0914 22:38:39.765321       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
	I0914 22:38:39.765338       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0914 22:38:39.839668       1 cache.go:39] Caches are synced for autoregister controller
	I0914 22:38:39.839959       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0914 22:38:39.839985       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0914 22:38:39.857395       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0914 22:38:39.857423       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 22:38:40.724327       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0914 22:38:40.724353       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0914 22:38:40.730749       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0914 22:38:40.733588       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0914 22:38:40.733609       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0914 22:38:41.100849       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 22:38:41.137382       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0914 22:38:41.286792       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0914 22:38:41.287704       1 controller.go:609] quota admission added evaluator for: endpoints
	I0914 22:38:41.290588       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 22:38:42.127052       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0914 22:38:42.638413       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0914 22:38:42.734517       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0914 22:38:46.044853       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 22:38:57.802109       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0914 22:38:58.189041       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0914 22:39:31.825384       1 controller.go:609] quota admission added evaluator for: jobs.batch
	
	* 
	* ==> kube-controller-manager [83f98203d414d696e14b2711695f2c5a7d9d3c5076b22c1290bfe89285f9ead5] <==
	* W0914 22:38:57.991523       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-438037. Assuming now as a timestamp.
	I0914 22:38:57.991567       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0914 22:38:57.991777       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0914 22:38:57.992229       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-438037", UID:"2bb82b25-7e67-4c54-a542-14588ce226a3", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-438037 event: Registered Node ingress-addon-legacy-438037 in Controller
	I0914 22:38:58.040606       1 request.go:621] Throttling request took 1.001863264s, request: GET:https://control-plane.minikube.internal:8443/apis/policy/v1beta1?timeout=32s
	I0914 22:38:58.185872       1 shared_informer.go:230] Caches are synced for deployment 
	I0914 22:38:58.192048       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"875d0043-6e29-40db-add2-ed41ecc45680", APIVersion:"apps/v1", ResourceVersion:"203", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
	I0914 22:38:58.192768       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I0914 22:38:58.228406       1 shared_informer.go:230] Caches are synced for disruption 
	I0914 22:38:58.228433       1 disruption.go:339] Sending events to api server.
	I0914 22:38:58.235092       1 shared_informer.go:230] Caches are synced for ReplicaSet 
	I0914 22:38:58.283132       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"4cec48ed-2f94-43f6-b197-c13e7c73ff54", APIVersion:"apps/v1", ResourceVersion:"350", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-hzd5r
	I0914 22:38:58.286588       1 shared_informer.go:230] Caches are synced for HPA 
	I0914 22:38:58.342658       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"4cec48ed-2f94-43f6-b197-c13e7c73ff54", APIVersion:"apps/v1", ResourceVersion:"350", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-5vlzt
	I0914 22:38:58.345199       1 shared_informer.go:230] Caches are synced for resource quota 
	I0914 22:38:58.409364       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0914 22:38:58.409538       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0914 22:38:58.412725       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0914 22:38:58.692900       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I0914 22:38:58.692946       1 shared_informer.go:230] Caches are synced for resource quota 
	I0914 22:39:17.992419       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0914 22:39:31.798162       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"e909c5a6-2d53-4567-a107-575a5f6707f7", APIVersion:"apps/v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0914 22:39:31.816010       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"3333c8f2-1cc7-45f3-9e1b-d4b53cf0d3f8", APIVersion:"apps/v1", ResourceVersion:"485", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-s8f7c
	I0914 22:39:31.869615       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"bab0da64-8cc0-4661-90fe-f556e7025d46", APIVersion:"batch/v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-ghrnm
	I0914 22:39:31.933935       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"dad9e579-0d55-4a41-b241-821cb4d3d12e", APIVersion:"batch/v1", ResourceVersion:"505", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-h4zhs
	
	* 
	* ==> kube-proxy [780e22127b8db39f795a28700fe9c214d23132f05f2136225f3d2f7375563543] <==
	* W0914 22:38:59.016610       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0914 22:38:59.047645       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0914 22:38:59.047699       1 server_others.go:186] Using iptables Proxier.
	I0914 22:38:59.054629       1 server.go:583] Version: v1.18.20
	I0914 22:38:59.056768       1 config.go:315] Starting service config controller
	I0914 22:38:59.056790       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0914 22:38:59.057487       1 config.go:133] Starting endpoints config controller
	I0914 22:38:59.057508       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0914 22:38:59.157105       1 shared_informer.go:230] Caches are synced for service config 
	I0914 22:38:59.157705       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [81d8212acfd52e5c3e834537545ddd573c4cd0d0ae674e5fd6a6d2f318429c5f] <==
	* W0914 22:38:39.860166       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 22:38:39.860212       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 22:38:39.901203       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0914 22:38:39.901225       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0914 22:38:39.903646       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0914 22:38:39.903935       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 22:38:39.903952       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 22:38:39.903977       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0914 22:38:39.906140       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 22:38:39.912169       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 22:38:39.912355       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 22:38:39.913053       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 22:38:39.913148       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 22:38:39.913251       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 22:38:39.913391       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 22:38:39.913484       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 22:38:39.913576       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 22:38:39.913661       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 22:38:39.913743       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 22:38:39.913938       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 22:38:40.846790       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 22:38:40.953053       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 22:38:40.957509       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0914 22:38:41.404018       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0914 22:38:58.600602       1 factory.go:503] pod: kube-system/coredns-66bff467f8-5vlzt is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Sep 14 22:43:02 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:43:02.082486    1632 pod_workers.go:191] Error syncing pod 6a2c0ddc-a24f-4666-8814-d96ed3d667ab ("ingress-nginx-admission-patch-h4zhs_ingress-nginx(6a2c0ddc-a24f-4666-8814-d96ed3d667ab)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Sep 14 22:43:17 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:43:17.081794    1632 pod_workers.go:191] Error syncing pod 6a2c0ddc-a24f-4666-8814-d96ed3d667ab ("ingress-nginx-admission-patch-h4zhs_ingress-nginx(6a2c0ddc-a24f-4666-8814-d96ed3d667ab)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Sep 14 22:43:42 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:43:42.896286    1632 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
	Sep 14 22:43:42 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:43:42.896385    1632 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/fac0d695-472b-458c-a2ab-8eafe202cc26-webhook-cert podName:fac0d695-472b-458c-a2ab-8eafe202cc26 nodeName:}" failed. No retries permitted until 2023-09-14 22:45:44.896359614 +0000 UTC m=+422.338265123 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fac0d695-472b-458c-a2ab-8eafe202cc26-webhook-cert\") pod \"ingress-nginx-controller-7fcf777cb7-s8f7c\" (UID: \"fac0d695-472b-458c-a2ab-8eafe202cc26\") : secret \"ingress-nginx-admission\" not found"
	Sep 14 22:43:46 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:43:46.158959    1632 container_manager_linux.go:512] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /docker/b0c5401d2b6589a2101ad70a82b98c8003160ae7878e4de50a6024b59febda5d, memory: /docker/b0c5401d2b6589a2101ad70a82b98c8003160ae7878e4de50a6024b59febda5d/system.slice/kubelet.service
	Sep 14 22:43:52 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:43:52.081027    1632 kubelet.go:1703] Unable to attach or mount volumes for pod "ingress-nginx-controller-7fcf777cb7-s8f7c_ingress-nginx(fac0d695-472b-458c-a2ab-8eafe202cc26)": unmounted volumes=[webhook-cert], unattached volumes=[webhook-cert ingress-nginx-token-mdwdb]: timed out waiting for the condition; skipping pod
	Sep 14 22:43:52 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:43:52.081065    1632 pod_workers.go:191] Error syncing pod fac0d695-472b-458c-a2ab-8eafe202cc26 ("ingress-nginx-controller-7fcf777cb7-s8f7c_ingress-nginx(fac0d695-472b-458c-a2ab-8eafe202cc26)"), skipping: unmounted volumes=[webhook-cert], unattached volumes=[webhook-cert ingress-nginx-token-mdwdb]: timed out waiting for the condition
	Sep 14 22:44:00 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:44:00.601527    1632 remote_image.go:113] PullImage "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" from image service failed: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Sep 14 22:44:00 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:44:00.601588    1632 kuberuntime_image.go:50] Pull image "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" failed: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Sep 14 22:44:00 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:44:00.601831    1632 kuberuntime_manager.go:818] container start failed: ErrImagePull: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Sep 14 22:44:00 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:44:00.601867    1632 pod_workers.go:191] Error syncing pod 5ad84a26-a101-483d-bea0-10d00c66a1a3 ("ingress-nginx-admission-create-ghrnm_ingress-nginx(5ad84a26-a101-483d-bea0-10d00c66a1a3)"), skipping: failed to "StartContainer" for "create" with ErrImagePull: "rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Sep 14 22:44:12 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:44:12.082217    1632 pod_workers.go:191] Error syncing pod 5ad84a26-a101-483d-bea0-10d00c66a1a3 ("ingress-nginx-admission-create-ghrnm_ingress-nginx(5ad84a26-a101-483d-bea0-10d00c66a1a3)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Sep 14 22:44:24 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:44:24.082635    1632 pod_workers.go:191] Error syncing pod 5ad84a26-a101-483d-bea0-10d00c66a1a3 ("ingress-nginx-admission-create-ghrnm_ingress-nginx(5ad84a26-a101-483d-bea0-10d00c66a1a3)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Sep 14 22:44:30 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:44:30.887188    1632 remote_image.go:113] PullImage "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" from image service failed: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Sep 14 22:44:30 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:44:30.887246    1632 kuberuntime_image.go:50] Pull image "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" failed: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Sep 14 22:44:30 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:44:30.887308    1632 kuberuntime_manager.go:818] container start failed: ErrImagePull: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Sep 14 22:44:30 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:44:30.887341    1632 pod_workers.go:191] Error syncing pod 6a2c0ddc-a24f-4666-8814-d96ed3d667ab ("ingress-nginx-admission-patch-h4zhs_ingress-nginx(6a2c0ddc-a24f-4666-8814-d96ed3d667ab)"), skipping: failed to "StartContainer" for "patch" with ErrImagePull: "rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Sep 14 22:44:38 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:44:38.081766    1632 pod_workers.go:191] Error syncing pod 5ad84a26-a101-483d-bea0-10d00c66a1a3 ("ingress-nginx-admission-create-ghrnm_ingress-nginx(5ad84a26-a101-483d-bea0-10d00c66a1a3)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Sep 14 22:44:45 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:44:45.081893    1632 pod_workers.go:191] Error syncing pod 6a2c0ddc-a24f-4666-8814-d96ed3d667ab ("ingress-nginx-admission-patch-h4zhs_ingress-nginx(6a2c0ddc-a24f-4666-8814-d96ed3d667ab)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Sep 14 22:44:52 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:44:52.082041    1632 pod_workers.go:191] Error syncing pod 5ad84a26-a101-483d-bea0-10d00c66a1a3 ("ingress-nginx-admission-create-ghrnm_ingress-nginx(5ad84a26-a101-483d-bea0-10d00c66a1a3)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Sep 14 22:44:56 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:44:56.082471    1632 pod_workers.go:191] Error syncing pod 6a2c0ddc-a24f-4666-8814-d96ed3d667ab ("ingress-nginx-admission-patch-h4zhs_ingress-nginx(6a2c0ddc-a24f-4666-8814-d96ed3d667ab)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Sep 14 22:45:03 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:45:03.081756    1632 pod_workers.go:191] Error syncing pod 5ad84a26-a101-483d-bea0-10d00c66a1a3 ("ingress-nginx-admission-create-ghrnm_ingress-nginx(5ad84a26-a101-483d-bea0-10d00c66a1a3)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Sep 14 22:45:10 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:45:10.082806    1632 pod_workers.go:191] Error syncing pod 6a2c0ddc-a24f-4666-8814-d96ed3d667ab ("ingress-nginx-admission-patch-h4zhs_ingress-nginx(6a2c0ddc-a24f-4666-8814-d96ed3d667ab)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Sep 14 22:45:16 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:45:16.082078    1632 pod_workers.go:191] Error syncing pod 5ad84a26-a101-483d-bea0-10d00c66a1a3 ("ingress-nginx-admission-create-ghrnm_ingress-nginx(5ad84a26-a101-483d-bea0-10d00c66a1a3)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Sep 14 22:45:21 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:45:21.081712    1632 pod_workers.go:191] Error syncing pod 6a2c0ddc-a24f-4666-8814-d96ed3d667ab ("ingress-nginx-admission-patch-h4zhs_ingress-nginx(6a2c0ddc-a24f-4666-8814-d96ed3d667ab)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	
	* 
	* ==> storage-provisioner [cafe9f18505ce7504a6f56982bfd9776971ef136689d6a2a7586815095c34739] <==
	* I0914 22:39:23.648342       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 22:39:23.664111       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 22:39:23.664196       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 22:39:23.671255       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 22:39:23.671509       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-438037_59b811ee-48ed-4113-aea1-3e7b799f143d!
	I0914 22:39:23.676553       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cc667857-4d30-4b1a-bf28-250208f6dcee", APIVersion:"v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-438037_59b811ee-48ed-4113-aea1-3e7b799f143d became leader
	I0914 22:39:23.772266       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-438037_59b811ee-48ed-4113-aea1-3e7b799f143d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-438037 -n ingress-addon-legacy-438037
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-438037 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-ghrnm ingress-nginx-admission-patch-h4zhs ingress-nginx-controller-7fcf777cb7-s8f7c
helpers_test.go:274: ======> post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ingress-addon-legacy-438037 describe pod ingress-nginx-admission-create-ghrnm ingress-nginx-admission-patch-h4zhs ingress-nginx-controller-7fcf777cb7-s8f7c
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-438037 describe pod ingress-nginx-admission-create-ghrnm ingress-nginx-admission-patch-h4zhs ingress-nginx-controller-7fcf777cb7-s8f7c: exit status 1 (82.646127ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-ghrnm" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-h4zhs" not found
	Error from server (NotFound): pods "ingress-nginx-controller-7fcf777cb7-s8f7c" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context ingress-addon-legacy-438037 describe pod ingress-nginx-admission-create-ghrnm ingress-nginx-admission-patch-h4zhs ingress-nginx-controller-7fcf777cb7-s8f7c: exit status 1
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (363.60s)
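Note on the failure above: the kubelet log shows both ingress-nginx admission jobs never started because pulls of docker.io/jettech/kube-webhook-certgen:v1.5.1 were rejected by Docker Hub with toomanyrequests, so the ingress-nginx-admission secret was never created and the controller pod could not mount its webhook-cert volume. A possible local mitigation, sketched here only and not something this test run does (it assumes the host can still pull the image once, and the manifest pins the image by digest, so the loaded copy must match), is to pre-seed the image into the profile's node so the jobs never hit the registry:

	docker pull docker.io/jettech/kube-webhook-certgen:v1.5.1
	out/minikube-linux-arm64 -p ingress-addon-legacy-438037 image load docker.io/jettech/kube-webhook-certgen:v1.5.1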

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (92.5s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-438037 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E0914 22:46:42.928342 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-438037 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: exit status 1 (1m30.071453573s)

                                                
                                                
** stderr ** 
	error: timed out waiting for the condition on pods/ingress-nginx-controller-7fcf777cb7-s8f7c

                                                
                                                
** /stderr **
addons_test.go:184: failed waiting for ingress-nginx-controller : exit status 1
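When the wait times out like this, a quick way to see why the controller never became Ready is to describe the pod and list recent namespace events; a manual debugging sketch against this profile (not part of the test harness) might look like:

	kubectl --context ingress-addon-legacy-438037 -n ingress-nginx describe pod -l app.kubernetes.io/component=controller
	kubectl --context ingress-addon-legacy-438037 -n ingress-nginx get events --sort-by=.lastTimestamp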
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-438037
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-438037:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b0c5401d2b6589a2101ad70a82b98c8003160ae7878e4de50a6024b59febda5d",
	        "Created": "2023-09-14T22:38:08.28874024Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2874463,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-14T22:38:08.610860935Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:dc3fcbe613a9f8e1e2fcaa6abcc8f1cc38d54475810991578dbd56e1d327de1f",
	        "ResolvConfPath": "/var/lib/docker/containers/b0c5401d2b6589a2101ad70a82b98c8003160ae7878e4de50a6024b59febda5d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b0c5401d2b6589a2101ad70a82b98c8003160ae7878e4de50a6024b59febda5d/hostname",
	        "HostsPath": "/var/lib/docker/containers/b0c5401d2b6589a2101ad70a82b98c8003160ae7878e4de50a6024b59febda5d/hosts",
	        "LogPath": "/var/lib/docker/containers/b0c5401d2b6589a2101ad70a82b98c8003160ae7878e4de50a6024b59febda5d/b0c5401d2b6589a2101ad70a82b98c8003160ae7878e4de50a6024b59febda5d-json.log",
	        "Name": "/ingress-addon-legacy-438037",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-438037:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-438037",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a07883f0a34ec5c58b12d2c5d26526d6aeb416f022975246bd3bc2271fbb7c34-init/diff:/var/lib/docker/overlay2/01d6f4b44b4d3652921d9dfec86a5600f173a3b2af60ce73c84e7669723804ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a07883f0a34ec5c58b12d2c5d26526d6aeb416f022975246bd3bc2271fbb7c34/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a07883f0a34ec5c58b12d2c5d26526d6aeb416f022975246bd3bc2271fbb7c34/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a07883f0a34ec5c58b12d2c5d26526d6aeb416f022975246bd3bc2271fbb7c34/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-438037",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-438037/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-438037",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-438037",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-438037",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2298efcc0440e913519c26208ad72b9dea63d7fbedbb50dd7305dbc6af0cd169",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36403"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36402"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36399"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36401"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36400"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2298efcc0440",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-438037": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b0c5401d2b65",
	                        "ingress-addon-legacy-438037"
	                    ],
	                    "NetworkID": "6548aec4158f04656459ab4dc15211040015d4503b13087d384dd575ba38a18f",
	                    "EndpointID": "09b828b7258720d1768de6e899c7d16999a34b454aae7427adc2246f35138f3f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-438037 -n ingress-addon-legacy-438037
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-438037 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-438037 logs -n 25: (1.441604776s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| cp             | functional-127648 cp                                                   | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | functional-127648:/home/docker/cp-test.txt                             |                             |         |         |                     |                     |
	|                | /tmp/TestFunctionalparallelCpCmd674725946/001/cp-test.txt              |                             |         |         |                     |                     |
	| ssh            | functional-127648 ssh -n                                               | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | functional-127648 sudo cat                                             |                             |         |         |                     |                     |
	|                | /home/docker/cp-test.txt                                               |                             |         |         |                     |                     |
	| update-context | functional-127648                                                      | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-127648 image ls                                             | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	| update-context | functional-127648                                                      | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-127648                                                      | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-127648 image load --daemon                                  | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-127648               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-127648 image ls                                             | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	| image          | functional-127648 image save                                           | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-127648               |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-127648 image rm                                             | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-127648               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-127648 image ls                                             | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	| image          | functional-127648 image load                                           | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-127648 image ls                                             | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	| image          | functional-127648 image save --daemon                                  | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-127648               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-127648                                                      | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-127648                                                      | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-127648 ssh pgrep                                            | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-127648                                                      | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-127648                                                      | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-127648 image build -t                                       | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	|                | localhost/my-image:functional-127648                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-127648 image ls                                             | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	| delete         | -p functional-127648                                                   | functional-127648           | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:37 UTC |
	| start          | -p ingress-addon-legacy-438037                                         | ingress-addon-legacy-438037 | jenkins | v1.31.2 | 14 Sep 23 22:37 UTC | 14 Sep 23 22:39 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-438037                                            | ingress-addon-legacy-438037 | jenkins | v1.31.2 | 14 Sep 23 22:39 UTC |                     |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-438037                                            | ingress-addon-legacy-438037 | jenkins | v1.31.2 | 14 Sep 23 22:45 UTC | 14 Sep 23 22:45 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 22:37:50
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 22:37:50.516976 2874006 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:37:50.517187 2874006 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:37:50.517212 2874006 out.go:309] Setting ErrFile to fd 2...
	I0914 22:37:50.517232 2874006 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:37:50.517515 2874006 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-2840729/.minikube/bin
	I0914 22:37:50.517998 2874006 out.go:303] Setting JSON to false
	I0914 22:37:50.519152 2874006 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":80415,"bootTime":1694650655,"procs":379,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0914 22:37:50.519242 2874006 start.go:138] virtualization:  
	I0914 22:37:50.521758 2874006 out.go:177] * [ingress-addon-legacy-438037] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0914 22:37:50.523878 2874006 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 22:37:50.525727 2874006 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 22:37:50.524013 2874006 notify.go:220] Checking for updates...
	I0914 22:37:50.527706 2874006 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 22:37:50.529391 2874006 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube
	I0914 22:37:50.531096 2874006 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 22:37:50.532861 2874006 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 22:37:50.534915 2874006 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 22:37:50.558559 2874006 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0914 22:37:50.558651 2874006 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 22:37:50.638811 2874006 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-09-14 22:37:50.629177969 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 22:37:50.638914 2874006 docker.go:294] overlay module found
	I0914 22:37:50.642237 2874006 out.go:177] * Using the docker driver based on user configuration
	I0914 22:37:50.644082 2874006 start.go:298] selected driver: docker
	I0914 22:37:50.644096 2874006 start.go:902] validating driver "docker" against <nil>
	I0914 22:37:50.644107 2874006 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 22:37:50.644749 2874006 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 22:37:50.710934 2874006 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-09-14 22:37:50.701749565 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 22:37:50.711098 2874006 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 22:37:50.711315 2874006 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 22:37:50.713789 2874006 out.go:177] * Using Docker driver with root privileges
	I0914 22:37:50.715845 2874006 cni.go:84] Creating CNI manager for ""
	I0914 22:37:50.715861 2874006 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 22:37:50.715881 2874006 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 22:37:50.715902 2874006 start_flags.go:321] config:
	{Name:ingress-addon-legacy-438037 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-438037 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:37:50.718055 2874006 out.go:177] * Starting control plane node ingress-addon-legacy-438037 in cluster ingress-addon-legacy-438037
	I0914 22:37:50.719684 2874006 cache.go:122] Beginning downloading kic base image for docker with crio
	I0914 22:37:50.721236 2874006 out.go:177] * Pulling base image ...
	I0914 22:37:50.723024 2874006 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0914 22:37:50.723112 2874006 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 in local docker daemon
	I0914 22:37:50.739885 2874006 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 in local docker daemon, skipping pull
	I0914 22:37:50.739907 2874006 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 exists in daemon, skipping load
	I0914 22:37:50.788734 2874006 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0914 22:37:50.788770 2874006 cache.go:57] Caching tarball of preloaded images
	I0914 22:37:50.788953 2874006 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0914 22:37:50.791021 2874006 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0914 22:37:50.793038 2874006 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0914 22:37:50.890030 2874006 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0914 22:38:00.454953 2874006 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0914 22:38:00.455051 2874006 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0914 22:38:01.632608 2874006 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0914 22:38:01.632984 2874006 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/config.json ...
	I0914 22:38:01.633021 2874006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/config.json: {Name:mk9b7219ecbd6eb32d8d24c3944a40400fe056ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:38:01.633213 2874006 cache.go:195] Successfully downloaded all kic artifacts
	I0914 22:38:01.633260 2874006 start.go:365] acquiring machines lock for ingress-addon-legacy-438037: {Name:mke9ca8a8be24e85471f3c976865beeaab0c5876 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:38:01.633321 2874006 start.go:369] acquired machines lock for "ingress-addon-legacy-438037" in 46.137µs
	I0914 22:38:01.633343 2874006 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-438037 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-438037 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:38:01.633416 2874006 start.go:125] createHost starting for "" (driver="docker")
	I0914 22:38:01.635844 2874006 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0914 22:38:01.636111 2874006 start.go:159] libmachine.API.Create for "ingress-addon-legacy-438037" (driver="docker")
	I0914 22:38:01.636146 2874006 client.go:168] LocalClient.Create starting
	I0914 22:38:01.636259 2874006 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem
	I0914 22:38:01.636295 2874006 main.go:141] libmachine: Decoding PEM data...
	I0914 22:38:01.636315 2874006 main.go:141] libmachine: Parsing certificate...
	I0914 22:38:01.636372 2874006 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem
	I0914 22:38:01.636393 2874006 main.go:141] libmachine: Decoding PEM data...
	I0914 22:38:01.636409 2874006 main.go:141] libmachine: Parsing certificate...
	I0914 22:38:01.636770 2874006 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-438037 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0914 22:38:01.654950 2874006 cli_runner.go:211] docker network inspect ingress-addon-legacy-438037 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0914 22:38:01.655040 2874006 network_create.go:281] running [docker network inspect ingress-addon-legacy-438037] to gather additional debugging logs...
	I0914 22:38:01.655061 2874006 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-438037
	W0914 22:38:01.678323 2874006 cli_runner.go:211] docker network inspect ingress-addon-legacy-438037 returned with exit code 1
	I0914 22:38:01.678356 2874006 network_create.go:284] error running [docker network inspect ingress-addon-legacy-438037]: docker network inspect ingress-addon-legacy-438037: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-438037 not found
	I0914 22:38:01.678371 2874006 network_create.go:286] output of [docker network inspect ingress-addon-legacy-438037]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-438037 not found
	
	** /stderr **
	I0914 22:38:01.678435 2874006 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 22:38:01.696924 2874006 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400000e740}
	I0914 22:38:01.696962 2874006 network_create.go:123] attempt to create docker network ingress-addon-legacy-438037 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0914 22:38:01.697019 2874006 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-438037 ingress-addon-legacy-438037
	I0914 22:38:01.768185 2874006 network_create.go:107] docker network ingress-addon-legacy-438037 192.168.49.0/24 created
	I0914 22:38:01.768217 2874006 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-438037" container
	I0914 22:38:01.768292 2874006 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0914 22:38:01.784674 2874006 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-438037 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-438037 --label created_by.minikube.sigs.k8s.io=true
	I0914 22:38:01.803217 2874006 oci.go:103] Successfully created a docker volume ingress-addon-legacy-438037
	I0914 22:38:01.803298 2874006 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-438037-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-438037 --entrypoint /usr/bin/test -v ingress-addon-legacy-438037:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 -d /var/lib
	I0914 22:38:03.331850 2874006 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-438037-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-438037 --entrypoint /usr/bin/test -v ingress-addon-legacy-438037:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 -d /var/lib: (1.528512759s)
	I0914 22:38:03.331883 2874006 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-438037
	I0914 22:38:03.331910 2874006 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0914 22:38:03.331930 2874006 kic.go:190] Starting extracting preloaded images to volume ...
	I0914 22:38:03.332024 2874006 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-438037:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 -I lz4 -xf /preloaded.tar -C /extractDir
	I0914 22:38:08.194674 2874006 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-438037:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 -I lz4 -xf /preloaded.tar -C /extractDir: (4.862603416s)
	I0914 22:38:08.194704 2874006 kic.go:199] duration metric: took 4.862772 seconds to extract preloaded images to volume
	W0914 22:38:08.194839 2874006 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0914 22:38:08.194956 2874006 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0914 22:38:08.272816 2874006 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-438037 --name ingress-addon-legacy-438037 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-438037 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-438037 --network ingress-addon-legacy-438037 --ip 192.168.49.2 --volume ingress-addon-legacy-438037:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503
	I0914 22:38:08.618158 2874006 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-438037 --format={{.State.Running}}
	I0914 22:38:08.638080 2874006 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-438037 --format={{.State.Status}}
	I0914 22:38:08.663478 2874006 cli_runner.go:164] Run: docker exec ingress-addon-legacy-438037 stat /var/lib/dpkg/alternatives/iptables
	I0914 22:38:08.737911 2874006 oci.go:144] the created container "ingress-addon-legacy-438037" has a running status.
	I0914 22:38:08.737944 2874006 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/ingress-addon-legacy-438037/id_rsa...
	I0914 22:38:09.104703 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/ingress-addon-legacy-438037/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0914 22:38:09.104745 2874006 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/ingress-addon-legacy-438037/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0914 22:38:09.142940 2874006 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-438037 --format={{.State.Status}}
	I0914 22:38:09.179443 2874006 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0914 22:38:09.179466 2874006 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-438037 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0914 22:38:09.278580 2874006 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-438037 --format={{.State.Status}}
	I0914 22:38:09.310043 2874006 machine.go:88] provisioning docker machine ...
	I0914 22:38:09.310077 2874006 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-438037"
	I0914 22:38:09.310149 2874006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-438037
	I0914 22:38:09.332042 2874006 main.go:141] libmachine: Using SSH client type: native
	I0914 22:38:09.332487 2874006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36403 <nil> <nil>}
	I0914 22:38:09.332606 2874006 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-438037 && echo "ingress-addon-legacy-438037" | sudo tee /etc/hostname
	I0914 22:38:09.574346 2874006 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-438037
	
	I0914 22:38:09.574481 2874006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-438037
	I0914 22:38:09.605001 2874006 main.go:141] libmachine: Using SSH client type: native
	I0914 22:38:09.605396 2874006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36403 <nil> <nil>}
	I0914 22:38:09.605417 2874006 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-438037' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-438037/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-438037' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:38:09.753846 2874006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:38:09.753873 2874006 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17243-2840729/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-2840729/.minikube}
	I0914 22:38:09.753892 2874006 ubuntu.go:177] setting up certificates
	I0914 22:38:09.753900 2874006 provision.go:83] configureAuth start
	I0914 22:38:09.753965 2874006 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-438037
	I0914 22:38:09.778148 2874006 provision.go:138] copyHostCerts
	I0914 22:38:09.778197 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem
	I0914 22:38:09.778228 2874006 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem, removing ...
	I0914 22:38:09.778238 2874006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem
	I0914 22:38:09.778296 2874006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem (1675 bytes)
	I0914 22:38:09.778370 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem
	I0914 22:38:09.778389 2874006 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem, removing ...
	I0914 22:38:09.778394 2874006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem
	I0914 22:38:09.778420 2874006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem (1078 bytes)
	I0914 22:38:09.778459 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem
	I0914 22:38:09.778478 2874006 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem, removing ...
	I0914 22:38:09.778482 2874006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem
	I0914 22:38:09.778507 2874006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem (1123 bytes)
	I0914 22:38:09.778563 2874006 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-438037 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-438037]
	I0914 22:38:10.029778 2874006 provision.go:172] copyRemoteCerts
	I0914 22:38:10.029850 2874006 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:38:10.029896 2874006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-438037
	I0914 22:38:10.050785 2874006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36403 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/ingress-addon-legacy-438037/id_rsa Username:docker}
	I0914 22:38:10.155127 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 22:38:10.155257 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:38:10.183655 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 22:38:10.183738 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0914 22:38:10.212397 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 22:38:10.212461 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 22:38:10.241684 2874006 provision.go:86] duration metric: configureAuth took 487.768161ms
	I0914 22:38:10.241710 2874006 ubuntu.go:193] setting minikube options for container-runtime
	I0914 22:38:10.241915 2874006 config.go:182] Loaded profile config "ingress-addon-legacy-438037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0914 22:38:10.242050 2874006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-438037
	I0914 22:38:10.260384 2874006 main.go:141] libmachine: Using SSH client type: native
	I0914 22:38:10.260868 2874006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36403 <nil> <nil>}
	I0914 22:38:10.260890 2874006 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:38:10.538769 2874006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:38:10.538830 2874006 machine.go:91] provisioned docker machine in 1.228762628s
	I0914 22:38:10.538856 2874006 client.go:171] LocalClient.Create took 8.902703694s
	I0914 22:38:10.538903 2874006 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-438037" took 8.902778779s
	I0914 22:38:10.538929 2874006 start.go:300] post-start starting for "ingress-addon-legacy-438037" (driver="docker")
	I0914 22:38:10.538956 2874006 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:38:10.539058 2874006 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:38:10.539126 2874006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-438037
	I0914 22:38:10.556885 2874006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36403 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/ingress-addon-legacy-438037/id_rsa Username:docker}
	I0914 22:38:10.659192 2874006 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:38:10.663483 2874006 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 22:38:10.663519 2874006 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 22:38:10.663530 2874006 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 22:38:10.663537 2874006 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0914 22:38:10.663549 2874006 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-2840729/.minikube/addons for local assets ...
	I0914 22:38:10.663609 2874006 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-2840729/.minikube/files for local assets ...
	I0914 22:38:10.663695 2874006 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem -> 28461092.pem in /etc/ssl/certs
	I0914 22:38:10.663707 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem -> /etc/ssl/certs/28461092.pem
	I0914 22:38:10.663817 2874006 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:38:10.674185 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem --> /etc/ssl/certs/28461092.pem (1708 bytes)
	I0914 22:38:10.701205 2874006 start.go:303] post-start completed in 162.246747ms
	I0914 22:38:10.701614 2874006 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-438037
	I0914 22:38:10.719069 2874006 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/config.json ...
	I0914 22:38:10.719350 2874006 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 22:38:10.719403 2874006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-438037
	I0914 22:38:10.736433 2874006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36403 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/ingress-addon-legacy-438037/id_rsa Username:docker}
	I0914 22:38:10.838331 2874006 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 22:38:10.843774 2874006 start.go:128] duration metric: createHost completed in 9.210340593s
	I0914 22:38:10.843799 2874006 start.go:83] releasing machines lock for "ingress-addon-legacy-438037", held for 9.21046298s
	I0914 22:38:10.843875 2874006 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-438037
	I0914 22:38:10.861318 2874006 ssh_runner.go:195] Run: cat /version.json
	I0914 22:38:10.861334 2874006 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:38:10.861377 2874006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-438037
	I0914 22:38:10.861403 2874006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-438037
	I0914 22:38:10.881747 2874006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36403 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/ingress-addon-legacy-438037/id_rsa Username:docker}
	I0914 22:38:10.888586 2874006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36403 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/ingress-addon-legacy-438037/id_rsa Username:docker}
	I0914 22:38:11.120307 2874006 ssh_runner.go:195] Run: systemctl --version
	I0914 22:38:11.125783 2874006 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:38:11.270023 2874006 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 22:38:11.275513 2874006 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:38:11.299861 2874006 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0914 22:38:11.299974 2874006 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:38:11.335487 2874006 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0914 22:38:11.335511 2874006 start.go:469] detecting cgroup driver to use...
	I0914 22:38:11.335547 2874006 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0914 22:38:11.335600 2874006 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:38:11.354396 2874006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:38:11.367998 2874006 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:38:11.368093 2874006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:38:11.385119 2874006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:38:11.402249 2874006 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:38:11.493050 2874006 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:38:11.598056 2874006 docker.go:212] disabling docker service ...
	I0914 22:38:11.598134 2874006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:38:11.619654 2874006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:38:11.633597 2874006 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:38:11.740376 2874006 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:38:11.839999 2874006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:38:11.853361 2874006 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:38:11.872187 2874006 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 22:38:11.872281 2874006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:38:11.884251 2874006 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:38:11.884317 2874006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:38:11.896270 2874006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:38:11.908242 2874006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:38:11.919918 2874006 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:38:11.930773 2874006 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:38:11.940651 2874006 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:38:11.950284 2874006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:38:12.044547 2874006 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:38:12.172148 2874006 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:38:12.172226 2874006 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:38:12.177244 2874006 start.go:537] Will wait 60s for crictl version
	I0914 22:38:12.177322 2874006 ssh_runner.go:195] Run: which crictl
	I0914 22:38:12.181788 2874006 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:38:12.220570 2874006 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0914 22:38:12.220673 2874006 ssh_runner.go:195] Run: crio --version
	I0914 22:38:12.264973 2874006 ssh_runner.go:195] Run: crio --version
	I0914 22:38:12.315341 2874006 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0914 22:38:12.316962 2874006 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-438037 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 22:38:12.333951 2874006 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0914 22:38:12.338293 2874006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:38:12.350930 2874006 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0914 22:38:12.350994 2874006 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:38:12.404233 2874006 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0914 22:38:12.404307 2874006 ssh_runner.go:195] Run: which lz4
	I0914 22:38:12.408770 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0914 22:38:12.408865 2874006 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0914 22:38:12.413302 2874006 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 22:38:12.413342 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I0914 22:38:14.452258 2874006 crio.go:444] Took 2.043425 seconds to copy over tarball
	I0914 22:38:14.452379 2874006 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 22:38:17.053421 2874006 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.60101232s)
	I0914 22:38:17.053448 2874006 crio.go:451] Took 2.601117 seconds to extract the tarball
	I0914 22:38:17.053458 2874006 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 22:38:17.402053 2874006 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:38:17.444809 2874006 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0914 22:38:17.444833 2874006 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 22:38:17.444871 2874006 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:38:17.445074 2874006 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0914 22:38:17.445161 2874006 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0914 22:38:17.445234 2874006 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0914 22:38:17.445303 2874006 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0914 22:38:17.445365 2874006 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0914 22:38:17.445425 2874006 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0914 22:38:17.445520 2874006 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0914 22:38:17.446404 2874006 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:38:17.446821 2874006 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0914 22:38:17.447089 2874006 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0914 22:38:17.447241 2874006 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0914 22:38:17.447377 2874006 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0914 22:38:17.447501 2874006 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0914 22:38:17.447714 2874006 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0914 22:38:17.447897 2874006 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	W0914 22:38:17.874747 2874006 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0914 22:38:17.874925 2874006 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0914 22:38:17.920815 2874006 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0914 22:38:17.920895 2874006 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0914 22:38:17.920963 2874006 ssh_runner.go:195] Run: which crictl
	I0914 22:38:17.925377 2874006 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	W0914 22:38:17.930904 2874006 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0914 22:38:17.931129 2874006 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W0914 22:38:17.931394 2874006 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0914 22:38:17.931650 2874006 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0914 22:38:17.949896 2874006 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W0914 22:38:17.959691 2874006 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0914 22:38:17.959900 2874006 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0914 22:38:17.979124 2874006 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	W0914 22:38:17.980223 2874006 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0914 22:38:17.980472 2874006 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W0914 22:38:17.980919 2874006 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0914 22:38:17.981100 2874006 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W0914 22:38:18.014398 2874006 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0914 22:38:18.014618 2874006 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:38:18.153331 2874006 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0914 22:38:18.153390 2874006 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0914 22:38:18.153443 2874006 ssh_runner.go:195] Run: which crictl
	I0914 22:38:18.153518 2874006 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0914 22:38:18.153540 2874006 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0914 22:38:18.153561 2874006 ssh_runner.go:195] Run: which crictl
	I0914 22:38:18.180702 2874006 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0914 22:38:18.180764 2874006 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0914 22:38:18.180825 2874006 ssh_runner.go:195] Run: which crictl
	I0914 22:38:18.180973 2874006 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0914 22:38:18.181019 2874006 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0914 22:38:18.181059 2874006 ssh_runner.go:195] Run: which crictl
	I0914 22:38:18.195779 2874006 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0914 22:38:18.195843 2874006 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0914 22:38:18.195915 2874006 ssh_runner.go:195] Run: which crictl
	I0914 22:38:18.196012 2874006 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0914 22:38:18.196044 2874006 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0914 22:38:18.196078 2874006 ssh_runner.go:195] Run: which crictl
	I0914 22:38:18.289857 2874006 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0914 22:38:18.289936 2874006 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:38:18.289986 2874006 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0914 22:38:18.289994 2874006 ssh_runner.go:195] Run: which crictl
	I0914 22:38:18.290092 2874006 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0914 22:38:18.290132 2874006 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0914 22:38:18.290103 2874006 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 22:38:18.290173 2874006 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0914 22:38:18.290205 2874006 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0914 22:38:18.438497 2874006 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0914 22:38:18.438563 2874006 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0914 22:38:18.438613 2874006 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:38:18.438688 2874006 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0914 22:38:18.441986 2874006 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0914 22:38:18.442080 2874006 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0914 22:38:18.447133 2874006 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0914 22:38:18.506234 2874006 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 22:38:18.506306 2874006 cache_images.go:92] LoadImages completed in 1.061458724s
	W0914 22:38:18.506367 2874006 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
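What happened above: the images already known to the local daemon were amd64 builds, so minikube removed them from CRI-O and tried to load the arm64 variants from its on-disk cache; the kube-scheduler cache file was missing, hence the warning (the image is simply pulled later instead). A hedged sketch of how one might check and clear a wrong-architecture image by hand, mirroring the crictl calls in the log:

	# see whether the runtime already has the image, and remove it so the correct arch can be loaded/pulled
	sudo crictl images | grep kube-scheduler || echo "not present"
	sudo crictl rmi registry.k8s.io/kube-scheduler:v1.18.20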
	I0914 22:38:18.506446 2874006 ssh_runner.go:195] Run: crio config
	I0914 22:38:18.572127 2874006 cni.go:84] Creating CNI manager for ""
	I0914 22:38:18.572197 2874006 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 22:38:18.572245 2874006 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:38:18.572288 2874006 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-438037 NodeName:ingress-addon-legacy-438037 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 22:38:18.572471 2874006 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-438037"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:38:18.572602 2874006 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-438037 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-438037 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 22:38:18.572689 2874006 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0914 22:38:18.583132 2874006 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:38:18.583227 2874006 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:38:18.593542 2874006 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0914 22:38:18.613545 2874006 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0914 22:38:18.633331 2874006 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
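The three files just copied are the systemd drop-in for kubelet, the kubelet unit itself, and the kubeadm config rendered above (written as kubeadm.yaml.new and renamed to kubeadm.yaml just before init). If needed, they can be inspected on the node afterwards, for example (an illustrative sketch, not part of the test):

	# inspect the kubeadm config and kubelet drop-in that minikube rendered onto the node
	minikube -p ingress-addon-legacy-438037 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	minikube -p ingress-addon-legacy-438037 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf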
	I0914 22:38:18.653222 2874006 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0914 22:38:18.657536 2874006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:38:18.670558 2874006 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037 for IP: 192.168.49.2
	I0914 22:38:18.670587 2874006 certs.go:190] acquiring lock for shared ca certs: {Name:mk7b43b7d537d49c569d06654003547535d1ca4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:38:18.670725 2874006 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.key
	I0914 22:38:18.670770 2874006 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.key
	I0914 22:38:18.670817 2874006 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.key
	I0914 22:38:18.670834 2874006 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt with IP's: []
	I0914 22:38:19.236344 2874006 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt ...
	I0914 22:38:19.236371 2874006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: {Name:mkc07f926e47dd7d4a3a52c66086888f6611c161 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:38:19.236594 2874006 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.key ...
	I0914 22:38:19.236607 2874006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.key: {Name:mk443293d6a4cc6d753f8ccc8849273d56660101 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:38:19.236698 2874006 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.key.dd3b5fb2
	I0914 22:38:19.236718 2874006 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0914 22:38:19.679754 2874006 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.crt.dd3b5fb2 ...
	I0914 22:38:19.679783 2874006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.crt.dd3b5fb2: {Name:mkd57d3090f93d8fab2f514d3f90d19e1e49e7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:38:19.679966 2874006 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.key.dd3b5fb2 ...
	I0914 22:38:19.679979 2874006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.key.dd3b5fb2: {Name:mk890fa1d3c6c217dab198b706f5e63d213e8bfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:38:19.680068 2874006 certs.go:337] copying /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.crt
	I0914 22:38:19.680146 2874006 certs.go:341] copying /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.key
	I0914 22:38:19.680205 2874006 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/proxy-client.key
	I0914 22:38:19.680221 2874006 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/proxy-client.crt with IP's: []
	I0914 22:38:20.315404 2874006 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/proxy-client.crt ...
	I0914 22:38:20.315433 2874006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/proxy-client.crt: {Name:mk2ddfde646fde62c20235a38ba8af63e946e80c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:38:20.315619 2874006 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/proxy-client.key ...
	I0914 22:38:20.315632 2874006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/proxy-client.key: {Name:mk82780cd8bda8c67e216cb828a35fd78be8194b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:38:20.315710 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 22:38:20.315726 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 22:38:20.315738 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 22:38:20.315752 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 22:38:20.315764 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 22:38:20.315782 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 22:38:20.315794 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 22:38:20.315805 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 22:38:20.315863 2874006 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/2846109.pem (1338 bytes)
	W0914 22:38:20.315903 2874006 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/2846109_empty.pem, impossibly tiny 0 bytes
	I0914 22:38:20.315917 2874006 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:38:20.315953 2874006 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:38:20.315987 2874006 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:38:20.316016 2874006 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem (1675 bytes)
	I0914 22:38:20.316062 2874006 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem (1708 bytes)
	I0914 22:38:20.316098 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:38:20.316114 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/2846109.pem -> /usr/share/ca-certificates/2846109.pem
	I0914 22:38:20.316126 2874006 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem -> /usr/share/ca-certificates/28461092.pem
	I0914 22:38:20.316758 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:38:20.344570 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 22:38:20.372606 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:38:20.400626 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 22:38:20.427775 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:38:20.455569 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 22:38:20.485024 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:38:20.515114 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:38:20.544955 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:38:20.573029 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/2846109.pem --> /usr/share/ca-certificates/2846109.pem (1338 bytes)
	I0914 22:38:20.600590 2874006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem --> /usr/share/ca-certificates/28461092.pem (1708 bytes)
	I0914 22:38:20.627407 2874006 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:38:20.647610 2874006 ssh_runner.go:195] Run: openssl version
	I0914 22:38:20.654411 2874006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:38:20.666144 2874006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:38:20.670862 2874006 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 22:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:38:20.670923 2874006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:38:20.679378 2874006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:38:20.691165 2874006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2846109.pem && ln -fs /usr/share/ca-certificates/2846109.pem /etc/ssl/certs/2846109.pem"
	I0914 22:38:20.702694 2874006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2846109.pem
	I0914 22:38:20.707326 2874006 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 22:34 /usr/share/ca-certificates/2846109.pem
	I0914 22:38:20.707436 2874006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2846109.pem
	I0914 22:38:20.715864 2874006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2846109.pem /etc/ssl/certs/51391683.0"
	I0914 22:38:20.727508 2874006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28461092.pem && ln -fs /usr/share/ca-certificates/28461092.pem /etc/ssl/certs/28461092.pem"
	I0914 22:38:20.739071 2874006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28461092.pem
	I0914 22:38:20.743724 2874006 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 22:34 /usr/share/ca-certificates/28461092.pem
	I0914 22:38:20.743787 2874006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28461092.pem
	I0914 22:38:20.752032 2874006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/28461092.pem /etc/ssl/certs/3ec20f2e.0"
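The loop above installs each CA under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash, which is how OpenSSL locates trusted CAs at verification time. A minimal sketch of the same pattern for one certificate (illustrative, equivalent to the commands in the log):

	# compute the OpenSSL subject hash for a CA and create the symlink the verifier expects
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"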
	I0914 22:38:20.763443 2874006 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:38:20.767732 2874006 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 22:38:20.767782 2874006 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-438037 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-438037 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:38:20.767855 2874006 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:38:20.767913 2874006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:38:20.813940 2874006 cri.go:89] found id: ""
	I0914 22:38:20.814007 2874006 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:38:20.824712 2874006 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:38:20.835192 2874006 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0914 22:38:20.835276 2874006 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:38:20.845640 2874006 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:38:20.845735 2874006 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0914 22:38:20.901714 2874006 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0914 22:38:20.902001 2874006 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 22:38:20.954183 2874006 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0914 22:38:20.954278 2874006 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1044-aws
	I0914 22:38:20.954316 2874006 kubeadm.go:322] OS: Linux
	I0914 22:38:20.954362 2874006 kubeadm.go:322] CGROUPS_CPU: enabled
	I0914 22:38:20.954410 2874006 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0914 22:38:20.954458 2874006 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0914 22:38:20.954512 2874006 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0914 22:38:20.954560 2874006 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0914 22:38:20.954612 2874006 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0914 22:38:21.044516 2874006 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 22:38:21.044623 2874006 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 22:38:21.044714 2874006 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 22:38:21.274414 2874006 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 22:38:21.275809 2874006 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 22:38:21.276062 2874006 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0914 22:38:21.384887 2874006 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 22:38:21.390189 2874006 out.go:204]   - Generating certificates and keys ...
	I0914 22:38:21.390379 2874006 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 22:38:21.390483 2874006 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 22:38:22.388587 2874006 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 22:38:22.682617 2874006 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0914 22:38:23.047196 2874006 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0914 22:38:23.726309 2874006 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0914 22:38:23.877082 2874006 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0914 22:38:23.877705 2874006 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-438037 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0914 22:38:25.534565 2874006 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0914 22:38:25.534935 2874006 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-438037 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0914 22:38:26.379184 2874006 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 22:38:26.584991 2874006 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 22:38:27.187194 2874006 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0914 22:38:27.187436 2874006 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 22:38:27.525918 2874006 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 22:38:28.245290 2874006 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 22:38:28.942501 2874006 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 22:38:29.718592 2874006 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 22:38:29.719274 2874006 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 22:38:29.721890 2874006 out.go:204]   - Booting up control plane ...
	I0914 22:38:29.721986 2874006 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 22:38:29.737480 2874006 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 22:38:29.739104 2874006 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 22:38:29.740296 2874006 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 22:38:29.743140 2874006 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 22:38:41.247068 2874006 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.502590 seconds
	I0914 22:38:41.247189 2874006 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 22:38:41.257140 2874006 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 22:38:41.786435 2874006 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 22:38:41.786584 2874006 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-438037 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0914 22:38:42.297945 2874006 kubeadm.go:322] [bootstrap-token] Using token: jnwj02.72mnz06o7v62mu14
	I0914 22:38:42.300132 2874006 out.go:204]   - Configuring RBAC rules ...
	I0914 22:38:42.300255 2874006 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 22:38:42.303885 2874006 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 22:38:42.317761 2874006 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 22:38:42.324842 2874006 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 22:38:42.335074 2874006 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 22:38:42.339072 2874006 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 22:38:42.351714 2874006 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 22:38:42.662180 2874006 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 22:38:42.799357 2874006 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 22:38:42.799375 2874006 kubeadm.go:322] 
	I0914 22:38:42.799465 2874006 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 22:38:42.799487 2874006 kubeadm.go:322] 
	I0914 22:38:42.799582 2874006 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 22:38:42.799593 2874006 kubeadm.go:322] 
	I0914 22:38:42.799629 2874006 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 22:38:42.799702 2874006 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 22:38:42.799750 2874006 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 22:38:42.799755 2874006 kubeadm.go:322] 
	I0914 22:38:42.799809 2874006 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 22:38:42.799900 2874006 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 22:38:42.799976 2874006 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 22:38:42.799991 2874006 kubeadm.go:322] 
	I0914 22:38:42.800079 2874006 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 22:38:42.800169 2874006 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 22:38:42.800177 2874006 kubeadm.go:322] 
	I0914 22:38:42.800259 2874006 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token jnwj02.72mnz06o7v62mu14 \
	I0914 22:38:42.800363 2874006 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a79084f9d3253dc9cd91fd80defbeb60d0999d7c0aaf6667eb02a65d009cd9dc \
	I0914 22:38:42.800388 2874006 kubeadm.go:322]     --control-plane 
	I0914 22:38:42.800395 2874006 kubeadm.go:322] 
	I0914 22:38:42.800475 2874006 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 22:38:42.800506 2874006 kubeadm.go:322] 
	I0914 22:38:42.800584 2874006 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jnwj02.72mnz06o7v62mu14 \
	I0914 22:38:42.800687 2874006 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a79084f9d3253dc9cd91fd80defbeb60d0999d7c0aaf6667eb02a65d009cd9dc 
	I0914 22:38:42.803411 2874006 kubeadm.go:322] W0914 22:38:20.900891    1226 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0914 22:38:42.803625 2874006 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-aws\n", err: exit status 1
	I0914 22:38:42.803727 2874006 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 22:38:42.803855 2874006 kubeadm.go:322] W0914 22:38:29.737141    1226 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0914 22:38:42.803983 2874006 kubeadm.go:322] W0914 22:38:29.739186    1226 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
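The join commands and bootstrap token printed above are part of kubeadm's standard init output; the token is short-lived (ttl 24h in the config above). If it is lost or expires, a fresh worker join command can be generated on the control-plane node, for example (a standard kubeadm command, shown for reference rather than something this test runs):

	# regenerate a worker join command with a new bootstrap token (run on the control-plane node)
	sudo kubeadm token create --print-join-command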
	I0914 22:38:42.804001 2874006 cni.go:84] Creating CNI manager for ""
	I0914 22:38:42.804010 2874006 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 22:38:42.805875 2874006 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0914 22:38:42.807684 2874006 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 22:38:42.812456 2874006 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0914 22:38:42.812479 2874006 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0914 22:38:42.834382 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
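With the docker driver and the CRI-O runtime, minikube installs kindnet as the CNI plugin by applying the manifest above. One way to confirm the CNI pods come up afterwards is to list them by label; the app=kindnet label is an assumption about the manifest, so this is only an illustrative check:

	# verify the kindnet pods started after the CNI manifest was applied (label is assumed)
	kubectl --context ingress-addon-legacy-438037 -n kube-system get pods -l app=kindnet -o wide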
	I0914 22:38:43.282661 2874006 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:38:43.282776 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:43.282777 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=ingress-addon-legacy-438037 minikube.k8s.io/updated_at=2023_09_14T22_38_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:43.422303 2874006 ops.go:34] apiserver oom_adj: -16
	I0914 22:38:43.422418 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:43.514690 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:44.106220 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:44.605883 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:45.106062 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:45.606239 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:46.106220 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:46.605622 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:47.105824 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:47.606650 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:48.106512 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:48.606477 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:49.106530 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:49.606481 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:50.106598 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:50.606101 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:51.106250 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:51.606645 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:52.105792 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:52.606326 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:53.105652 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:53.606184 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:54.105657 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:54.606438 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:55.106448 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:55.606024 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:56.105650 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:56.605667 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:57.105732 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:57.605898 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:58.106117 2874006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:38:58.212763 2874006 kubeadm.go:1081] duration metric: took 14.930045065s to wait for elevateKubeSystemPrivileges.
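The repeated "kubectl get sa default" calls above are minikube polling until the default service account exists, which only happens once kube-controller-manager's service-account controller is running; here that wait took about 14.9 seconds. The same wait can be written directly in shell (an illustrative equivalent using the host's kubeconfig context rather than the node-local one used in the log):

	# poll until the default service account appears, i.e. the controller-manager is serving
	until kubectl --context ingress-addon-legacy-438037 get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done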
	I0914 22:38:58.212794 2874006 kubeadm.go:406] StartCluster complete in 37.445016104s
	I0914 22:38:58.212811 2874006 settings.go:142] acquiring lock: {Name:mk797c549b93011f59a1b1413899d7ef3e9584bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:38:58.212868 2874006 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 22:38:58.213577 2874006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/kubeconfig: {Name:mk7bbed64d52f47ff1629e01e738a8a5f092c9ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:38:58.214272 2874006 kapi.go:59] client config for ingress-addon-legacy-438037: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.key", CAFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[
]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 22:38:58.215625 2874006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:38:58.215863 2874006 config.go:182] Loaded profile config "ingress-addon-legacy-438037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0914 22:38:58.215900 2874006 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:38:58.215957 2874006 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-438037"
	I0914 22:38:58.215971 2874006 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-438037"
	I0914 22:38:58.216026 2874006 host.go:66] Checking if "ingress-addon-legacy-438037" exists ...
	I0914 22:38:58.216473 2874006 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-438037 --format={{.State.Status}}
	I0914 22:38:58.216989 2874006 cert_rotation.go:137] Starting client certificate rotation controller
	I0914 22:38:58.217031 2874006 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-438037"
	I0914 22:38:58.217048 2874006 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-438037"
	I0914 22:38:58.217309 2874006 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-438037 --format={{.State.Status}}
	I0914 22:38:58.264394 2874006 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:38:58.266436 2874006 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:38:58.266458 2874006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:38:58.266534 2874006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-438037
	I0914 22:38:58.278577 2874006 kapi.go:59] client config for ingress-addon-legacy-438037: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.key", CAFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[
]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 22:38:58.297252 2874006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36403 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/ingress-addon-legacy-438037/id_rsa Username:docker}
	W0914 22:38:58.334273 2874006 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-438037" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0914 22:38:58.334303 2874006 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0914 22:38:58.334326 2874006 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:38:58.336481 2874006 out.go:177] * Verifying Kubernetes components...
	I0914 22:38:58.338863 2874006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:38:58.340645 2874006 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-438037"
	I0914 22:38:58.340679 2874006 host.go:66] Checking if "ingress-addon-legacy-438037" exists ...
	I0914 22:38:58.341120 2874006 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-438037 --format={{.State.Status}}
	I0914 22:38:58.371511 2874006 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:38:58.371536 2874006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:38:58.371598 2874006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-438037
	I0914 22:38:58.407047 2874006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36403 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/ingress-addon-legacy-438037/id_rsa Username:docker}
	I0914 22:38:58.453507 2874006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:38:58.495061 2874006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 22:38:58.495553 2874006 kapi.go:59] client config for ingress-addon-legacy-438037: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.key", CAFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 22:38:58.495807 2874006 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-438037" to be "Ready" ...
	I0914 22:38:58.641109 2874006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:38:59.093546 2874006 start.go:917] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0914 22:38:59.097066 2874006 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0914 22:38:59.100985 2874006 addons.go:502] enable addons completed in 885.070906ms: enabled=[storage-provisioner default-storageclass]
	I0914 22:39:00.567086 2874006 node_ready.go:58] node "ingress-addon-legacy-438037" has status "Ready":"False"
	I0914 22:39:03.065939 2874006 node_ready.go:58] node "ingress-addon-legacy-438037" has status "Ready":"False"
	I0914 22:39:05.066123 2874006 node_ready.go:58] node "ingress-addon-legacy-438037" has status "Ready":"False"
	I0914 22:39:07.566778 2874006 node_ready.go:58] node "ingress-addon-legacy-438037" has status "Ready":"False"
	I0914 22:39:10.066400 2874006 node_ready.go:58] node "ingress-addon-legacy-438037" has status "Ready":"False"
	I0914 22:39:12.066940 2874006 node_ready.go:58] node "ingress-addon-legacy-438037" has status "Ready":"False"
	I0914 22:39:14.566048 2874006 node_ready.go:58] node "ingress-addon-legacy-438037" has status "Ready":"False"
	I0914 22:39:16.566084 2874006 node_ready.go:49] node "ingress-addon-legacy-438037" has status "Ready":"True"
	I0914 22:39:16.566111 2874006 node_ready.go:38] duration metric: took 18.07028321s waiting for node "ingress-addon-legacy-438037" to be "Ready" ...
	I0914 22:39:16.566123 2874006 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:39:16.574537 2874006 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-5vlzt" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:18.581880 2874006 pod_ready.go:102] pod "coredns-66bff467f8-5vlzt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-09-14 22:38:58 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0914 22:39:20.584675 2874006 pod_ready.go:102] pod "coredns-66bff467f8-5vlzt" in "kube-system" namespace has status "Ready":"False"
	I0914 22:39:22.584731 2874006 pod_ready.go:102] pod "coredns-66bff467f8-5vlzt" in "kube-system" namespace has status "Ready":"False"
	I0914 22:39:25.084619 2874006 pod_ready.go:102] pod "coredns-66bff467f8-5vlzt" in "kube-system" namespace has status "Ready":"False"
	I0914 22:39:25.591194 2874006 pod_ready.go:92] pod "coredns-66bff467f8-5vlzt" in "kube-system" namespace has status "Ready":"True"
	I0914 22:39:25.591220 2874006 pod_ready.go:81] duration metric: took 9.01664169s waiting for pod "coredns-66bff467f8-5vlzt" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:25.591231 2874006 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-hzd5r" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:27.609926 2874006 pod_ready.go:102] pod "coredns-66bff467f8-hzd5r" in "kube-system" namespace has status "Ready":"False"
	I0914 22:39:29.610327 2874006 pod_ready.go:92] pod "coredns-66bff467f8-hzd5r" in "kube-system" namespace has status "Ready":"True"
	I0914 22:39:29.610352 2874006 pod_ready.go:81] duration metric: took 4.019113284s waiting for pod "coredns-66bff467f8-hzd5r" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:29.610364 2874006 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-438037" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:29.614890 2874006 pod_ready.go:92] pod "etcd-ingress-addon-legacy-438037" in "kube-system" namespace has status "Ready":"True"
	I0914 22:39:29.614914 2874006 pod_ready.go:81] duration metric: took 4.541863ms waiting for pod "etcd-ingress-addon-legacy-438037" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:29.614928 2874006 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-438037" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:29.619327 2874006 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-438037" in "kube-system" namespace has status "Ready":"True"
	I0914 22:39:29.619348 2874006 pod_ready.go:81] duration metric: took 4.412616ms waiting for pod "kube-apiserver-ingress-addon-legacy-438037" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:29.619359 2874006 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-438037" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:29.623761 2874006 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-438037" in "kube-system" namespace has status "Ready":"True"
	I0914 22:39:29.623782 2874006 pod_ready.go:81] duration metric: took 4.416194ms waiting for pod "kube-controller-manager-ingress-addon-legacy-438037" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:29.623792 2874006 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-79mhd" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:29.628226 2874006 pod_ready.go:92] pod "kube-proxy-79mhd" in "kube-system" namespace has status "Ready":"True"
	I0914 22:39:29.628244 2874006 pod_ready.go:81] duration metric: took 4.445379ms waiting for pod "kube-proxy-79mhd" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:29.628254 2874006 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-438037" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:29.805622 2874006 request.go:629] Waited for 177.293206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-438037
	I0914 22:39:30.005814 2874006 request.go:629] Waited for 197.349398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-438037
	I0914 22:39:30.008713 2874006 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-438037" in "kube-system" namespace has status "Ready":"True"
	I0914 22:39:30.008738 2874006 pod_ready.go:81] duration metric: took 380.477176ms waiting for pod "kube-scheduler-ingress-addon-legacy-438037" in "kube-system" namespace to be "Ready" ...
	I0914 22:39:30.008751 2874006 pod_ready.go:38] duration metric: took 13.442617969s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:39:30.008769 2874006 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:39:30.008829 2874006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:39:30.023339 2874006 api_server.go:72] duration metric: took 31.688975029s to wait for apiserver process to appear ...
	I0914 22:39:30.023366 2874006 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:39:30.023385 2874006 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 22:39:30.032956 2874006 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0914 22:39:30.033866 2874006 api_server.go:141] control plane version: v1.18.20
	I0914 22:39:30.033891 2874006 api_server.go:131] duration metric: took 10.516849ms to wait for apiserver health ...
	I0914 22:39:30.033900 2874006 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:39:30.205236 2874006 request.go:629] Waited for 171.23182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0914 22:39:30.211376 2874006 system_pods.go:59] 9 kube-system pods found
	I0914 22:39:30.211411 2874006 system_pods.go:61] "coredns-66bff467f8-5vlzt" [6e80d32c-0f03-48b3-a30a-21f772c3a5c1] Running
	I0914 22:39:30.211418 2874006 system_pods.go:61] "coredns-66bff467f8-hzd5r" [6df64232-0e4b-4f95-863f-8195e0b19ed6] Running
	I0914 22:39:30.211424 2874006 system_pods.go:61] "etcd-ingress-addon-legacy-438037" [dd33171a-d5ff-434f-95b7-48f30add3ebb] Running
	I0914 22:39:30.211429 2874006 system_pods.go:61] "kindnet-ft9s6" [d5386d34-1bfd-488c-a959-d4847ddb8a76] Running
	I0914 22:39:30.211435 2874006 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-438037" [15c1a842-b237-4448-b215-17be2692d221] Running
	I0914 22:39:30.211440 2874006 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-438037" [dcf4fd51-6e75-46c4-93c4-59e6ef3deb4c] Running
	I0914 22:39:30.211445 2874006 system_pods.go:61] "kube-proxy-79mhd" [a9cc9c4a-d968-4403-a34b-9ea2c671326f] Running
	I0914 22:39:30.211450 2874006 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-438037" [61e4ecc7-a0f5-412c-ad2f-e3e5cce42226] Running
	I0914 22:39:30.211461 2874006 system_pods.go:61] "storage-provisioner" [0a1d1b79-2747-4d8d-8b93-c687e75482f0] Running
	I0914 22:39:30.211471 2874006 system_pods.go:74] duration metric: took 177.564838ms to wait for pod list to return data ...
	I0914 22:39:30.211482 2874006 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:39:30.405891 2874006 request.go:629] Waited for 194.317548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0914 22:39:30.408360 2874006 default_sa.go:45] found service account: "default"
	I0914 22:39:30.408391 2874006 default_sa.go:55] duration metric: took 196.899051ms for default service account to be created ...
	I0914 22:39:30.408400 2874006 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:39:30.605661 2874006 request.go:629] Waited for 197.196527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0914 22:39:30.612068 2874006 system_pods.go:86] 9 kube-system pods found
	I0914 22:39:30.612101 2874006 system_pods.go:89] "coredns-66bff467f8-5vlzt" [6e80d32c-0f03-48b3-a30a-21f772c3a5c1] Running
	I0914 22:39:30.612107 2874006 system_pods.go:89] "coredns-66bff467f8-hzd5r" [6df64232-0e4b-4f95-863f-8195e0b19ed6] Running
	I0914 22:39:30.612113 2874006 system_pods.go:89] "etcd-ingress-addon-legacy-438037" [dd33171a-d5ff-434f-95b7-48f30add3ebb] Running
	I0914 22:39:30.612117 2874006 system_pods.go:89] "kindnet-ft9s6" [d5386d34-1bfd-488c-a959-d4847ddb8a76] Running
	I0914 22:39:30.612122 2874006 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-438037" [15c1a842-b237-4448-b215-17be2692d221] Running
	I0914 22:39:30.612134 2874006 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-438037" [dcf4fd51-6e75-46c4-93c4-59e6ef3deb4c] Running
	I0914 22:39:30.612139 2874006 system_pods.go:89] "kube-proxy-79mhd" [a9cc9c4a-d968-4403-a34b-9ea2c671326f] Running
	I0914 22:39:30.612144 2874006 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-438037" [61e4ecc7-a0f5-412c-ad2f-e3e5cce42226] Running
	I0914 22:39:30.612152 2874006 system_pods.go:89] "storage-provisioner" [0a1d1b79-2747-4d8d-8b93-c687e75482f0] Running
	I0914 22:39:30.612159 2874006 system_pods.go:126] duration metric: took 203.753282ms to wait for k8s-apps to be running ...
	I0914 22:39:30.612173 2874006 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:39:30.612234 2874006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:39:30.625811 2874006 system_svc.go:56] duration metric: took 13.625926ms WaitForService to wait for kubelet.
	I0914 22:39:30.625876 2874006 kubeadm.go:581] duration metric: took 32.291520904s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:39:30.625910 2874006 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:39:30.805158 2874006 request.go:629] Waited for 179.157706ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0914 22:39:30.807966 2874006 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 22:39:30.807998 2874006 node_conditions.go:123] node cpu capacity is 2
	I0914 22:39:30.808011 2874006 node_conditions.go:105] duration metric: took 182.089134ms to run NodePressure ...
	I0914 22:39:30.808023 2874006 start.go:228] waiting for startup goroutines ...
	I0914 22:39:30.808029 2874006 start.go:233] waiting for cluster config update ...
	I0914 22:39:30.808039 2874006 start.go:242] writing updated cluster config ...
	I0914 22:39:30.808324 2874006 ssh_runner.go:195] Run: rm -f paused
	I0914 22:39:30.862128 2874006 start.go:600] kubectl: 1.28.2, cluster: 1.18.20 (minor skew: 10)
	I0914 22:39:30.865248 2874006 out.go:177] 
	W0914 22:39:30.867843 2874006 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0914 22:39:30.870043 2874006 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0914 22:39:30.872489 2874006 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-438037" cluster and "default" namespace by default
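	(For context: the sed pipeline run at 22:38:58 above rewrites the kube-system/coredns ConfigMap so that host.minikube.internal resolves inside the cluster. A minimal sketch of the Corefile region it produces, reconstructed from the sed expression in the log rather than captured from the cluster, with untouched directives elided:
	
	    .:53 {
	        errors
	        log
	        ...
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...
	    }
	
	The "host record injected into CoreDNS's ConfigMap" line at 22:38:59 confirms that the replace succeeded.)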
	
	* 
	* ==> CRI-O <==
	* Sep 14 22:45:34 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:45:34.081507340Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=d2da98b0-d2a6-47ea-8c66-ad5111d759fb name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:45:35 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:45:35.471448086Z" level=info msg="Running pod sandbox: kube-system/kube-ingress-dns-minikube/POD" id=74c059da-1a51-4136-a0a5-448b6ffdc6bf name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Sep 14 22:45:35 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:45:35.471511791Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 14 22:45:35 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:45:35.507458837Z" level=info msg="Ran pod sandbox 18dafff1089684e703369542f54b1cfa074a92abd28221197f30f088069e756e with infra container: kube-system/kube-ingress-dns-minikube/POD" id=74c059da-1a51-4136-a0a5-448b6ffdc6bf name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Sep 14 22:45:35 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:45:35.508532141Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=cbcc542f-59e3-4be8-b837-1cbe7df8c1c7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:45:35 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:45:35.612445841Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=a1ff6e8e-e475-4c4d-907c-0c5071e092a2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:45:46 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:45:46.082246688Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=13b975ef-fe86-46f8-94d5-9e9abe9bd34e name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:45:46 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:45:46.084089165Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=a6886c60-748c-4263-aa36-99c72aa534a0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:45:46 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:45:46.084336631Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=a6886c60-748c-4263-aa36-99c72aa534a0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:45:58 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:45:58.081245795Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=f33739f2-6eac-46b3-b390-9c2096cadd46 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:45:58 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:45:58.081525836Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=f33739f2-6eac-46b3-b390-9c2096cadd46 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:46:01 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:46:01.081212053Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=fe28cf96-6cac-43e7-a283-8c77e770dfc0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:46:15 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:46:15.081215526Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=016af5ab-1f7c-4399-8453-fc8e70d9c92b name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:46:28 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:46:28.081272379Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=c5d0aa5d-6f5d-4dd1-a0fd-cc123462fe67 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:46:28 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:46:28.575465840Z" level=info msg="Pulling image: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=a51194e0-5193-4dc5-b305-8118ee2b90eb name=/runtime.v1alpha2.ImageService/PullImage
	Sep 14 22:46:28 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:46:28.577344312Z" level=info msg="Trying to access \"docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Sep 14 22:46:39 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:46:39.081238504Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=35f938ab-3cf5-4a93-93d2-9aa9fc61f31f name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:46:42 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:46:42.081159522Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=ceb1d598-2be4-4a31-8c8c-4c8f62be3eb5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:46:42 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:46:42.081450566Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=ceb1d598-2be4-4a31-8c8c-4c8f62be3eb5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:46:51 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:46:51.081217690Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=019ce344-1138-4552-8127-d10bf32f892c name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:46:54 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:46:54.081284986Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=3f3f86e1-3ec7-44a4-a7dc-493ce227fb23 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:46:54 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:46:54.081550200Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=3f3f86e1-3ec7-44a4-a7dc-493ce227fb23 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:47:05 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:47:05.081334864Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=8b9c9ead-910c-4a1e-8a0a-46ff523a9705 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:47:05 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:47:05.081608382Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=8b9c9ead-910c-4a1e-8a0a-46ff523a9705 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 14 22:47:06 ingress-addon-legacy-438037 crio[894]: time="2023-09-14 22:47:06.081849332Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=b4baf949-cb9b-4e26-8bdf-dd759360d64d name=/runtime.v1alpha2.ImageService/ImageStatus
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cafe9f18505ce       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2   7 minutes ago       Running             storage-provisioner       0                   0f1b0c9298086       storage-provisioner
	b1e4183cba37c       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                  7 minutes ago       Running             coredns                   0                   ccb2db598c723       coredns-66bff467f8-hzd5r
	9f402e75947ee       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                  7 minutes ago       Running             coredns                   0                   c04499f3b2a79       coredns-66bff467f8-5vlzt
	3f27f63906e23       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                8 minutes ago       Running             kindnet-cni               0                   aaaa4ba223b42       kindnet-ft9s6
	780e22127b8db       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                  8 minutes ago       Running             kube-proxy                0                   5906684b7acd1       kube-proxy-79mhd
	623b6b437d505       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                  8 minutes ago       Running             kube-apiserver            0                   197b4ed6dc804       kube-apiserver-ingress-addon-legacy-438037
	81d8212acfd52       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                  8 minutes ago       Running             kube-scheduler            0                   211a90213946c       kube-scheduler-ingress-addon-legacy-438037
	83f98203d414d       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                  8 minutes ago       Running             kube-controller-manager   0                   b280add1026b6       kube-controller-manager-ingress-addon-legacy-438037
	55334ffa86b91       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                  8 minutes ago       Running             etcd                      0                   45478d10af744       etcd-ingress-addon-legacy-438037
	
	* 
	* ==> coredns [9f402e75947ee904968f7e9e180fab397a2506e694d6b9a57d9c7bf1a73c9b32] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 45700869df5177c7f3d9f7a279928a55
	CoreDNS-1.6.7
	linux/arm64, go1.13.6, da7f65b
	[INFO] 127.0.0.1:35939 - 5770 "HINFO IN 8229871420030904252.7931612907154627456. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03709457s
	
	* 
	* ==> coredns [b1e4183cba37c7a4a2dc1f88d09a2f9aa668e181cd6dae13939244675ea721ba] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 45700869df5177c7f3d9f7a279928a55
	CoreDNS-1.6.7
	linux/arm64, go1.13.6, da7f65b
	[INFO] 127.0.0.1:53976 - 51578 "HINFO IN 2214536823561160362.6027128080179966188. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022974876s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-438037
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-438037
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=ingress-addon-legacy-438037
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T22_38_43_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 22:38:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-438037
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 22:47:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 22:44:46 +0000   Thu, 14 Sep 2023 22:38:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 22:44:46 +0000   Thu, 14 Sep 2023 22:38:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 22:44:46 +0000   Thu, 14 Sep 2023 22:38:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 22:44:46 +0000   Thu, 14 Sep 2023 22:39:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-438037
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 88171635ae7c44a0b058e3522c445eb5
	  System UUID:                e886bf26-0baa-409c-95b7-680bfcd56e0f
	  Boot ID:                    370886c1-a939-4b15-8117-498126d3502e
	  Kernel Version:             5.15.0-1044-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  ingress-nginx               ingress-nginx-admission-create-ghrnm                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  ingress-nginx               ingress-nginx-admission-patch-h4zhs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  ingress-nginx               ingress-nginx-controller-7fcf777cb7-s8f7c              100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         7m35s
	  kube-system                 coredns-66bff467f8-5vlzt                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m8s
	  kube-system                 coredns-66bff467f8-hzd5r                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m8s
	  kube-system                 etcd-ingress-addon-legacy-438037                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kindnet-ft9s6                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m9s
	  kube-system                 kube-apiserver-ingress-addon-legacy-438037             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-438037    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-ingress-dns-minikube                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-79mhd                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 kube-scheduler-ingress-addon-legacy-438037             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             280Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m35s (x4 over 8m35s)  kubelet     Node ingress-addon-legacy-438037 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m35s (x5 over 8m35s)  kubelet     Node ingress-addon-legacy-438037 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m35s (x4 over 8m35s)  kubelet     Node ingress-addon-legacy-438037 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m20s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m20s                  kubelet     Node ingress-addon-legacy-438037 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m20s                  kubelet     Node ingress-addon-legacy-438037 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m20s                  kubelet     Node ingress-addon-legacy-438037 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m7s                   kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                7m50s                  kubelet     Node ingress-addon-legacy-438037 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001074] FS-Cache: O-key=[8] '85703b0000000000'
	[  +0.000714] FS-Cache: N-cookie c=000000e5 [p=000000db fl=2 nc=0 na=1]
	[  +0.000899] FS-Cache: N-cookie d=00000000358e642b{9p.inode} n=0000000040a297ab
	[  +0.001017] FS-Cache: N-key=[8] '85703b0000000000'
	[  +2.012590] FS-Cache: Duplicate cookie detected
	[  +0.000690] FS-Cache: O-cookie c=000000dc [p=000000db fl=226 nc=0 na=1]
	[  +0.000956] FS-Cache: O-cookie d=00000000358e642b{9p.inode} n=0000000000e476c3
	[  +0.001056] FS-Cache: O-key=[8] '84703b0000000000'
	[  +0.000740] FS-Cache: N-cookie c=000000e7 [p=000000db fl=2 nc=0 na=1]
	[  +0.001048] FS-Cache: N-cookie d=00000000358e642b{9p.inode} n=00000000e4905bc3
	[  +0.001024] FS-Cache: N-key=[8] '84703b0000000000'
	[  +0.406786] FS-Cache: Duplicate cookie detected
	[  +0.000688] FS-Cache: O-cookie c=000000e1 [p=000000db fl=226 nc=0 na=1]
	[  +0.000951] FS-Cache: O-cookie d=00000000358e642b{9p.inode} n=000000007a274cdd
	[  +0.001021] FS-Cache: O-key=[8] '8a703b0000000000'
	[  +0.000726] FS-Cache: N-cookie c=000000e8 [p=000000db fl=2 nc=0 na=1]
	[  +0.000918] FS-Cache: N-cookie d=00000000358e642b{9p.inode} n=0000000038968ff8
	[  +0.001006] FS-Cache: N-key=[8] '8a703b0000000000'
	[  +4.128718] FS-Cache: Duplicate cookie detected
	[  +0.000680] FS-Cache: O-cookie c=000000ea [p=00000002 fl=222 nc=0 na=1]
	[  +0.001015] FS-Cache: O-cookie d=00000000fe6607cc{9P.session} n=000000001f02128f
	[  +0.001183] FS-Cache: O-key=[10] '34333134393838363731'
	[  +0.000776] FS-Cache: N-cookie c=000000eb [p=00000002 fl=2 nc=0 na=1]
	[  +0.000908] FS-Cache: N-cookie d=00000000fe6607cc{9P.session} n=00000000648dde5c
	[  +0.001093] FS-Cache: N-key=[10] '34333134393838363731'
	
	* 
	* ==> etcd [55334ffa86b91fe0538de4270106091fbede771928d115dc24738d4268024154] <==
	* raft2023/09/14 22:38:32 INFO: aec36adc501070cc became follower at term 0
	raft2023/09/14 22:38:32 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/09/14 22:38:32 INFO: aec36adc501070cc became follower at term 1
	raft2023/09/14 22:38:32 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-09-14 22:38:32.988159 W | auth: simple token is not cryptographically signed
	2023-09-14 22:38:32.993053 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-09-14 22:38:32.994110 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-09-14 22:38:32.995688 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	raft2023/09/14 22:38:32 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-09-14 22:38:32.995860 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-09-14 22:38:32.995963 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-14 22:38:32.996029 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/09/14 22:38:33 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/09/14 22:38:33 INFO: aec36adc501070cc became candidate at term 2
	raft2023/09/14 22:38:33 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/09/14 22:38:33 INFO: aec36adc501070cc became leader at term 2
	raft2023/09/14 22:38:33 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-09-14 22:38:33.902080 I | etcdserver: published {Name:ingress-addon-legacy-438037 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-09-14 22:38:33.936549 I | embed: ready to serve client requests
	2023-09-14 22:38:33.981814 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-14 22:38:34.126674 I | etcdserver: setting up the initial cluster version to 3.4
	2023-09-14 22:38:34.217415 I | embed: ready to serve client requests
	2023-09-14 22:38:34.316518 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-09-14 22:38:34.316703 I | etcdserver/api: enabled capabilities for version 3.4
	2023-09-14 22:38:34.348020 I | embed: serving client requests on 192.168.49.2:2379
	
	* 
	* ==> kernel <==
	*  22:47:06 up 22:29,  0 users,  load average: 0.17, 0.37, 1.07
	Linux ingress-addon-legacy-438037 5.15.0-1044-aws #49~20.04.1-Ubuntu SMP Mon Aug 21 17:10:24 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [3f27f63906e23bbd4a0bfdbbb2f77e9e07b0a2d175cadc6f0676cdd788aa947d] <==
	* I0914 22:45:02.146032       1 main.go:227] handling current node
	I0914 22:45:12.155524       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:45:12.155552       1 main.go:227] handling current node
	I0914 22:45:22.167674       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:45:22.167700       1 main.go:227] handling current node
	I0914 22:45:32.179930       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:45:32.179960       1 main.go:227] handling current node
	I0914 22:45:42.190992       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:45:42.191023       1 main.go:227] handling current node
	I0914 22:45:52.194558       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:45:52.194587       1 main.go:227] handling current node
	I0914 22:46:02.197867       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:46:02.197893       1 main.go:227] handling current node
	I0914 22:46:12.204042       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:46:12.204072       1 main.go:227] handling current node
	I0914 22:46:22.207107       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:46:22.207139       1 main.go:227] handling current node
	I0914 22:46:32.218071       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:46:32.218101       1 main.go:227] handling current node
	I0914 22:46:42.228474       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:46:42.228553       1 main.go:227] handling current node
	I0914 22:46:52.235689       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:46:52.235717       1 main.go:227] handling current node
	I0914 22:47:02.239556       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0914 22:47:02.239585       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [623b6b437d50508629b05820596abf28e9c10a1718b5b4657100c55687a897e3] <==
	* I0914 22:38:39.765305       1 establishing_controller.go:76] Starting EstablishingController
	I0914 22:38:39.765321       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
	I0914 22:38:39.765338       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0914 22:38:39.839668       1 cache.go:39] Caches are synced for autoregister controller
	I0914 22:38:39.839959       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0914 22:38:39.839985       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0914 22:38:39.857395       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0914 22:38:39.857423       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 22:38:40.724327       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0914 22:38:40.724353       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0914 22:38:40.730749       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0914 22:38:40.733588       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0914 22:38:40.733609       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0914 22:38:41.100849       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 22:38:41.137382       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0914 22:38:41.286792       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0914 22:38:41.287704       1 controller.go:609] quota admission added evaluator for: endpoints
	I0914 22:38:41.290588       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 22:38:42.127052       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0914 22:38:42.638413       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0914 22:38:42.734517       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0914 22:38:46.044853       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 22:38:57.802109       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0914 22:38:58.189041       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0914 22:39:31.825384       1 controller.go:609] quota admission added evaluator for: jobs.batch
	
	* 
	* ==> kube-controller-manager [83f98203d414d696e14b2711695f2c5a7d9d3c5076b22c1290bfe89285f9ead5] <==
	* W0914 22:38:57.991523       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-438037. Assuming now as a timestamp.
	I0914 22:38:57.991567       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0914 22:38:57.991777       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0914 22:38:57.992229       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-438037", UID:"2bb82b25-7e67-4c54-a542-14588ce226a3", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-438037 event: Registered Node ingress-addon-legacy-438037 in Controller
	I0914 22:38:58.040606       1 request.go:621] Throttling request took 1.001863264s, request: GET:https://control-plane.minikube.internal:8443/apis/policy/v1beta1?timeout=32s
	I0914 22:38:58.185872       1 shared_informer.go:230] Caches are synced for deployment 
	I0914 22:38:58.192048       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"875d0043-6e29-40db-add2-ed41ecc45680", APIVersion:"apps/v1", ResourceVersion:"203", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
	I0914 22:38:58.192768       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I0914 22:38:58.228406       1 shared_informer.go:230] Caches are synced for disruption 
	I0914 22:38:58.228433       1 disruption.go:339] Sending events to api server.
	I0914 22:38:58.235092       1 shared_informer.go:230] Caches are synced for ReplicaSet 
	I0914 22:38:58.283132       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"4cec48ed-2f94-43f6-b197-c13e7c73ff54", APIVersion:"apps/v1", ResourceVersion:"350", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-hzd5r
	I0914 22:38:58.286588       1 shared_informer.go:230] Caches are synced for HPA 
	I0914 22:38:58.342658       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"4cec48ed-2f94-43f6-b197-c13e7c73ff54", APIVersion:"apps/v1", ResourceVersion:"350", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-5vlzt
	I0914 22:38:58.345199       1 shared_informer.go:230] Caches are synced for resource quota 
	I0914 22:38:58.409364       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0914 22:38:58.409538       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0914 22:38:58.412725       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0914 22:38:58.692900       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I0914 22:38:58.692946       1 shared_informer.go:230] Caches are synced for resource quota 
	I0914 22:39:17.992419       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0914 22:39:31.798162       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"e909c5a6-2d53-4567-a107-575a5f6707f7", APIVersion:"apps/v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0914 22:39:31.816010       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"3333c8f2-1cc7-45f3-9e1b-d4b53cf0d3f8", APIVersion:"apps/v1", ResourceVersion:"485", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-s8f7c
	I0914 22:39:31.869615       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"bab0da64-8cc0-4661-90fe-f556e7025d46", APIVersion:"batch/v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-ghrnm
	I0914 22:39:31.933935       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"dad9e579-0d55-4a41-b241-821cb4d3d12e", APIVersion:"batch/v1", ResourceVersion:"505", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-h4zhs
	
	* 
	* ==> kube-proxy [780e22127b8db39f795a28700fe9c214d23132f05f2136225f3d2f7375563543] <==
	* W0914 22:38:59.016610       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0914 22:38:59.047645       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0914 22:38:59.047699       1 server_others.go:186] Using iptables Proxier.
	I0914 22:38:59.054629       1 server.go:583] Version: v1.18.20
	I0914 22:38:59.056768       1 config.go:315] Starting service config controller
	I0914 22:38:59.056790       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0914 22:38:59.057487       1 config.go:133] Starting endpoints config controller
	I0914 22:38:59.057508       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0914 22:38:59.157105       1 shared_informer.go:230] Caches are synced for service config 
	I0914 22:38:59.157705       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [81d8212acfd52e5c3e834537545ddd573c4cd0d0ae674e5fd6a6d2f318429c5f] <==
	* W0914 22:38:39.860166       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 22:38:39.860212       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 22:38:39.901203       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0914 22:38:39.901225       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0914 22:38:39.903646       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0914 22:38:39.903935       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 22:38:39.903952       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 22:38:39.903977       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0914 22:38:39.906140       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 22:38:39.912169       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 22:38:39.912355       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 22:38:39.913053       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 22:38:39.913148       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 22:38:39.913251       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 22:38:39.913391       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 22:38:39.913484       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 22:38:39.913576       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 22:38:39.913661       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 22:38:39.913743       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 22:38:39.913938       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 22:38:40.846790       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 22:38:40.953053       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 22:38:40.957509       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0914 22:38:41.404018       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0914 22:38:58.600602       1 factory.go:503] pod: kube-system/coredns-66bff467f8-5vlzt is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Sep 14 22:46:28 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:46:28.081810    1632 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 14 22:46:28 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:46:28.081838    1632 pod_workers.go:191] Error syncing pod 7233a425-56ff-47e9-8a72-8ca20ab81fa7 ("kube-ingress-dns-minikube_kube-system(7233a425-56ff-47e9-8a72-8ca20ab81fa7)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Sep 14 22:46:28 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:46:28.574748    1632 remote_image.go:113] PullImage "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" from image service failed: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Sep 14 22:46:28 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:46:28.574813    1632 kuberuntime_image.go:50] Pull image "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" failed: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Sep 14 22:46:28 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:46:28.574987    1632 kuberuntime_manager.go:818] container start failed: ErrImagePull: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Sep 14 22:46:28 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:46:28.575020    1632 pod_workers.go:191] Error syncing pod 5ad84a26-a101-483d-bea0-10d00c66a1a3 ("ingress-nginx-admission-create-ghrnm_ingress-nginx(5ad84a26-a101-483d-bea0-10d00c66a1a3)"), skipping: failed to "StartContainer" for "create" with ErrImagePull: "rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Sep 14 22:46:39 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:46:39.081577    1632 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 14 22:46:39 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:46:39.081618    1632 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 14 22:46:39 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:46:39.081660    1632 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 14 22:46:39 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:46:39.081690    1632 pod_workers.go:191] Error syncing pod 7233a425-56ff-47e9-8a72-8ca20ab81fa7 ("kube-ingress-dns-minikube_kube-system(7233a425-56ff-47e9-8a72-8ca20ab81fa7)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Sep 14 22:46:42 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:46:42.081708    1632 pod_workers.go:191] Error syncing pod 5ad84a26-a101-483d-bea0-10d00c66a1a3 ("ingress-nginx-admission-create-ghrnm_ingress-nginx(5ad84a26-a101-483d-bea0-10d00c66a1a3)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Sep 14 22:46:51 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:46:51.081559    1632 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 14 22:46:51 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:46:51.081598    1632 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 14 22:46:51 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:46:51.081645    1632 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 14 22:46:51 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:46:51.081683    1632 pod_workers.go:191] Error syncing pod 7233a425-56ff-47e9-8a72-8ca20ab81fa7 ("kube-ingress-dns-minikube_kube-system(7233a425-56ff-47e9-8a72-8ca20ab81fa7)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Sep 14 22:46:54 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:46:54.081906    1632 pod_workers.go:191] Error syncing pod 5ad84a26-a101-483d-bea0-10d00c66a1a3 ("ingress-nginx-admission-create-ghrnm_ingress-nginx(5ad84a26-a101-483d-bea0-10d00c66a1a3)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Sep 14 22:46:58 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:46:58.843185    1632 remote_image.go:113] PullImage "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" from image service failed: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Sep 14 22:46:58 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:46:58.843249    1632 kuberuntime_image.go:50] Pull image "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" failed: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Sep 14 22:46:58 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:46:58.843310    1632 kuberuntime_manager.go:818] container start failed: ErrImagePull: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Sep 14 22:46:58 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:46:58.843342    1632 pod_workers.go:191] Error syncing pod 6a2c0ddc-a24f-4666-8814-d96ed3d667ab ("ingress-nginx-admission-patch-h4zhs_ingress-nginx(6a2c0ddc-a24f-4666-8814-d96ed3d667ab)"), skipping: failed to "StartContainer" for "patch" with ErrImagePull: "rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Sep 14 22:47:05 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:47:05.081814    1632 pod_workers.go:191] Error syncing pod 5ad84a26-a101-483d-bea0-10d00c66a1a3 ("ingress-nginx-admission-create-ghrnm_ingress-nginx(5ad84a26-a101-483d-bea0-10d00c66a1a3)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Sep 14 22:47:06 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:47:06.082148    1632 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 14 22:47:06 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:47:06.082186    1632 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 14 22:47:06 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:47:06.082225    1632 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 14 22:47:06 ingress-addon-legacy-438037 kubelet[1632]: E0914 22:47:06.082254    1632 pod_workers.go:191] Error syncing pod 7233a425-56ff-47e9-8a72-8ca20ab81fa7 ("kube-ingress-dns-minikube_kube-system(7233a425-56ff-47e9-8a72-8ca20ab81fa7)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	
	* 
	* ==> storage-provisioner [cafe9f18505ce7504a6f56982bfd9776971ef136689d6a2a7586815095c34739] <==
	* I0914 22:39:23.648342       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 22:39:23.664111       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 22:39:23.664196       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 22:39:23.671255       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 22:39:23.671509       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-438037_59b811ee-48ed-4113-aea1-3e7b799f143d!
	I0914 22:39:23.676553       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cc667857-4d30-4b1a-bf28-250208f6dcee", APIVersion:"v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-438037_59b811ee-48ed-4113-aea1-3e7b799f143d became leader
	I0914 22:39:23.772266       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-438037_59b811ee-48ed-4113-aea1-3e7b799f143d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-438037 -n ingress-addon-legacy-438037
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-438037 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-ghrnm ingress-nginx-admission-patch-h4zhs ingress-nginx-controller-7fcf777cb7-s8f7c kube-ingress-dns-minikube
helpers_test.go:274: ======> post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ingress-addon-legacy-438037 describe pod ingress-nginx-admission-create-ghrnm ingress-nginx-admission-patch-h4zhs ingress-nginx-controller-7fcf777cb7-s8f7c kube-ingress-dns-minikube
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-438037 describe pod ingress-nginx-admission-create-ghrnm ingress-nginx-admission-patch-h4zhs ingress-nginx-controller-7fcf777cb7-s8f7c kube-ingress-dns-minikube: exit status 1 (82.103638ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-ghrnm" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-h4zhs" not found
	Error from server (NotFound): pods "ingress-nginx-controller-7fcf777cb7-s8f7c" not found
	Error from server (NotFound): pods "kube-ingress-dns-minikube" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context ingress-addon-legacy-438037 describe pod ingress-nginx-admission-create-ghrnm ingress-nginx-admission-patch-h4zhs ingress-nginx-controller-7fcf777cb7-s8f7c kube-ingress-dns-minikube: exit status 1
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (92.50s)
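The kubelet entries above point to two separate image problems behind this failure. First, CRI-O refuses to resolve the short image name cryptexlabs/minikube-ingress-dns:0.3.0 because the node's /etc/containers/registries.conf defines no unqualified-search registries. Second, the docker.io/jettech/kube-webhook-certgen pulls are rejected with toomanyrequests, i.e. the anonymous Docker Hub rate limit. The commands below are an illustrative sketch of how either condition could be addressed on the minikube node; they are not part of the test run, and the registry list and credentials are assumptions.

	# Let CRI-O resolve short names against docker.io (sketch; the node's real registries.conf is not shown in this report)
	out/minikube-linux-arm64 -p ingress-addon-legacy-438037 ssh "echo 'unqualified-search-registries = [\"docker.io\"]' | sudo tee -a /etc/containers/registries.conf && sudo systemctl restart crio"
	# Alternatively, reference the ingress-dns image by a fully qualified name (prefix it with docker.io/) so no short-name resolution is needed.

	# Pre-pull the webhook-certgen image with Docker Hub credentials to avoid the anonymous pull rate limit
	# (<user>:<token> is a placeholder, not a value from this report)
	out/minikube-linux-arm64 -p ingress-addon-legacy-438037 ssh "sudo crictl pull --creds '<user>:<token>' docker.io/jettech/kube-webhook-certgen:v1.5.1"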

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (4.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-174950 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-174950 -- exec busybox-5bc68d56bd-fkf4t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-174950 -- exec busybox-5bc68d56bd-fkf4t -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-174950 -- exec busybox-5bc68d56bd-fkf4t -- sh -c "ping -c 1 192.168.58.1": exit status 1 (228.421171ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-fkf4t): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-174950 -- exec busybox-5bc68d56bd-grlb8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-174950 -- exec busybox-5bc68d56bd-grlb8 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-174950 -- exec busybox-5bc68d56bd-grlb8 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (242.064946ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-grlb8): exit status 1
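Both pods report ping: permission denied (are you root?), which busybox prints when it cannot open an ICMP socket. Under CRI-O the container's default capability set typically lacks CAP_NET_RAW, and unprivileged ICMP datagram sockets are only permitted when net.ipv4.ping_group_range covers the process's group, so that is the likely cause rather than a routing problem. The checks below are a diagnostic sketch under that assumption; they are not steps the test performs, and the suggested securityContext change is hypothetical.

	# Inspect the effective capabilities and the ping group range inside the failing pod
	out/minikube-linux-arm64 kubectl -p multinode-174950 -- exec busybox-5bc68d56bd-fkf4t -- sh -c "grep Cap /proc/self/status; cat /proc/sys/net/ipv4/ping_group_range; id"

	# Hypothetical workaround: grant NET_RAW to the busybox container in its deployment spec
	#   securityContext:
	#     capabilities:
	#       add: ["NET_RAW"]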
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-174950
helpers_test.go:235: (dbg) docker inspect multinode-174950:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5804e744fc84baf470e6572e7e22ed97634e03253cb719647b3f90dadf424715",
	        "Created": "2023-09-14T22:53:03.486328464Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2910074,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-14T22:53:03.795227988Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:dc3fcbe613a9f8e1e2fcaa6abcc8f1cc38d54475810991578dbd56e1d327de1f",
	        "ResolvConfPath": "/var/lib/docker/containers/5804e744fc84baf470e6572e7e22ed97634e03253cb719647b3f90dadf424715/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5804e744fc84baf470e6572e7e22ed97634e03253cb719647b3f90dadf424715/hostname",
	        "HostsPath": "/var/lib/docker/containers/5804e744fc84baf470e6572e7e22ed97634e03253cb719647b3f90dadf424715/hosts",
	        "LogPath": "/var/lib/docker/containers/5804e744fc84baf470e6572e7e22ed97634e03253cb719647b3f90dadf424715/5804e744fc84baf470e6572e7e22ed97634e03253cb719647b3f90dadf424715-json.log",
	        "Name": "/multinode-174950",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-174950:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-174950",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e4270d9f8358af81dd2b76bf48735757e1c9fbd8884a4b25f29169a89c3bc872-init/diff:/var/lib/docker/overlay2/01d6f4b44b4d3652921d9dfec86a5600f173a3b2af60ce73c84e7669723804ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e4270d9f8358af81dd2b76bf48735757e1c9fbd8884a4b25f29169a89c3bc872/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e4270d9f8358af81dd2b76bf48735757e1c9fbd8884a4b25f29169a89c3bc872/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e4270d9f8358af81dd2b76bf48735757e1c9fbd8884a4b25f29169a89c3bc872/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-174950",
	                "Source": "/var/lib/docker/volumes/multinode-174950/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-174950",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-174950",
	                "name.minikube.sigs.k8s.io": "multinode-174950",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "90037aa6ee09d8112dd0c50e10e8e3dbaac867caf2647b9db5b960cd98f1e468",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36463"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36462"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36459"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36461"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36460"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/90037aa6ee09",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-174950": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5804e744fc84",
	                        "multinode-174950"
	                    ],
	                    "NetworkID": "3d3dcc9eef60f3296a0752084a8c3b73293b6a7b2ba94a2d0d9d24e429e1e9b8",
	                    "EndpointID": "4919b3bad50358bc9afab7fafc8cde1ff873070baca086f50038479caa4213a6",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-174950 -n multinode-174950
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-174950 logs -n 25: (1.565938205s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-538586                           | mount-start-2-538586 | jenkins | v1.31.2 | 14 Sep 23 22:52 UTC | 14 Sep 23 22:52 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-538586 ssh -- ls                    | mount-start-2-538586 | jenkins | v1.31.2 | 14 Sep 23 22:52 UTC | 14 Sep 23 22:52 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-536552                           | mount-start-1-536552 | jenkins | v1.31.2 | 14 Sep 23 22:52 UTC | 14 Sep 23 22:52 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-538586 ssh -- ls                    | mount-start-2-538586 | jenkins | v1.31.2 | 14 Sep 23 22:52 UTC | 14 Sep 23 22:52 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-538586                           | mount-start-2-538586 | jenkins | v1.31.2 | 14 Sep 23 22:52 UTC | 14 Sep 23 22:52 UTC |
	| start   | -p mount-start-2-538586                           | mount-start-2-538586 | jenkins | v1.31.2 | 14 Sep 23 22:52 UTC | 14 Sep 23 22:52 UTC |
	| ssh     | mount-start-2-538586 ssh -- ls                    | mount-start-2-538586 | jenkins | v1.31.2 | 14 Sep 23 22:52 UTC | 14 Sep 23 22:52 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-538586                           | mount-start-2-538586 | jenkins | v1.31.2 | 14 Sep 23 22:52 UTC | 14 Sep 23 22:52 UTC |
	| delete  | -p mount-start-1-536552                           | mount-start-1-536552 | jenkins | v1.31.2 | 14 Sep 23 22:52 UTC | 14 Sep 23 22:52 UTC |
	| start   | -p multinode-174950                               | multinode-174950     | jenkins | v1.31.2 | 14 Sep 23 22:52 UTC | 14 Sep 23 22:54 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-174950 -- apply -f                   | multinode-174950     | jenkins | v1.31.2 | 14 Sep 23 22:54 UTC | 14 Sep 23 22:54 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-174950 -- rollout                    | multinode-174950     | jenkins | v1.31.2 | 14 Sep 23 22:54 UTC | 14 Sep 23 22:54 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-174950 -- get pods -o                | multinode-174950     | jenkins | v1.31.2 | 14 Sep 23 22:54 UTC | 14 Sep 23 22:54 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-174950 -- get pods -o                | multinode-174950     | jenkins | v1.31.2 | 14 Sep 23 22:54 UTC | 14 Sep 23 22:54 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-174950 -- exec                       | multinode-174950     | jenkins | v1.31.2 | 14 Sep 23 22:54 UTC | 14 Sep 23 22:54 UTC |
	|         | busybox-5bc68d56bd-fkf4t --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-174950 -- exec                       | multinode-174950     | jenkins | v1.31.2 | 14 Sep 23 22:54 UTC | 14 Sep 23 22:54 UTC |
	|         | busybox-5bc68d56bd-grlb8 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-174950 -- exec                       | multinode-174950     | jenkins | v1.31.2 | 14 Sep 23 22:54 UTC | 14 Sep 23 22:54 UTC |
	|         | busybox-5bc68d56bd-fkf4t --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-174950 -- exec                       | multinode-174950     | jenkins | v1.31.2 | 14 Sep 23 22:54 UTC | 14 Sep 23 22:54 UTC |
	|         | busybox-5bc68d56bd-grlb8 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-174950 -- exec                       | multinode-174950     | jenkins | v1.31.2 | 14 Sep 23 22:54 UTC | 14 Sep 23 22:54 UTC |
	|         | busybox-5bc68d56bd-fkf4t -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-174950 -- exec                       | multinode-174950     | jenkins | v1.31.2 | 14 Sep 23 22:54 UTC | 14 Sep 23 22:54 UTC |
	|         | busybox-5bc68d56bd-grlb8 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-174950 -- get pods -o                | multinode-174950     | jenkins | v1.31.2 | 14 Sep 23 22:54 UTC | 14 Sep 23 22:54 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-174950 -- exec                       | multinode-174950     | jenkins | v1.31.2 | 14 Sep 23 22:54 UTC | 14 Sep 23 22:54 UTC |
	|         | busybox-5bc68d56bd-fkf4t                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-174950 -- exec                       | multinode-174950     | jenkins | v1.31.2 | 14 Sep 23 22:54 UTC |                     |
	|         | busybox-5bc68d56bd-fkf4t -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-174950 -- exec                       | multinode-174950     | jenkins | v1.31.2 | 14 Sep 23 22:54 UTC | 14 Sep 23 22:54 UTC |
	|         | busybox-5bc68d56bd-grlb8                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-174950 -- exec                       | multinode-174950     | jenkins | v1.31.2 | 14 Sep 23 22:54 UTC |                     |
	|         | busybox-5bc68d56bd-grlb8 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 22:52:58
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 22:52:58.231515 2909621 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:52:58.231691 2909621 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:52:58.231704 2909621 out.go:309] Setting ErrFile to fd 2...
	I0914 22:52:58.231721 2909621 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:52:58.231992 2909621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-2840729/.minikube/bin
	I0914 22:52:58.232455 2909621 out.go:303] Setting JSON to false
	I0914 22:52:58.233483 2909621 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":81323,"bootTime":1694650655,"procs":308,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0914 22:52:58.233551 2909621 start.go:138] virtualization:  
	I0914 22:52:58.236261 2909621 out.go:177] * [multinode-174950] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0914 22:52:58.238936 2909621 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 22:52:58.240977 2909621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 22:52:58.239107 2909621 notify.go:220] Checking for updates...
	I0914 22:52:58.244846 2909621 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 22:52:58.246808 2909621 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube
	I0914 22:52:58.248671 2909621 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 22:52:58.250828 2909621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 22:52:58.252988 2909621 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 22:52:58.280305 2909621 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0914 22:52:58.280402 2909621 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 22:52:58.368143 2909621 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:35 SystemTime:2023-09-14 22:52:58.358605858 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 22:52:58.368241 2909621 docker.go:294] overlay module found
	I0914 22:52:58.370335 2909621 out.go:177] * Using the docker driver based on user configuration
	I0914 22:52:58.372146 2909621 start.go:298] selected driver: docker
	I0914 22:52:58.372160 2909621 start.go:902] validating driver "docker" against <nil>
	I0914 22:52:58.372173 2909621 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 22:52:58.372842 2909621 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 22:52:58.438914 2909621 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:35 SystemTime:2023-09-14 22:52:58.429462876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 22:52:58.439071 2909621 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 22:52:58.439289 2909621 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 22:52:58.441296 2909621 out.go:177] * Using Docker driver with root privileges
	I0914 22:52:58.443374 2909621 cni.go:84] Creating CNI manager for ""
	I0914 22:52:58.443389 2909621 cni.go:136] 0 nodes found, recommending kindnet
	I0914 22:52:58.443398 2909621 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 22:52:58.443412 2909621 start_flags.go:321] config:
	{Name:multinode-174950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-174950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:52:58.445693 2909621 out.go:177] * Starting control plane node multinode-174950 in cluster multinode-174950
	I0914 22:52:58.447323 2909621 cache.go:122] Beginning downloading kic base image for docker with crio
	I0914 22:52:58.449204 2909621 out.go:177] * Pulling base image ...
	I0914 22:52:58.451014 2909621 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:52:58.451058 2909621 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4
	I0914 22:52:58.451076 2909621 cache.go:57] Caching tarball of preloaded images
	I0914 22:52:58.451092 2909621 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 in local docker daemon
	I0914 22:52:58.451159 2909621 preload.go:174] Found /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0914 22:52:58.451170 2909621 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0914 22:52:58.451494 2909621 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/config.json ...
	I0914 22:52:58.451523 2909621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/config.json: {Name:mk09456d5afc52319f6c33d8f6ea914f219396b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:52:58.468129 2909621 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 in local docker daemon, skipping pull
	I0914 22:52:58.468166 2909621 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 exists in daemon, skipping load
	I0914 22:52:58.468186 2909621 cache.go:195] Successfully downloaded all kic artifacts
	I0914 22:52:58.468213 2909621 start.go:365] acquiring machines lock for multinode-174950: {Name:mk06454f6395b18bc01f1f858b180ce42d92fb20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:52:58.468423 2909621 start.go:369] acquired machines lock for "multinode-174950" in 138.592µs
	I0914 22:52:58.468454 2909621 start.go:93] Provisioning new machine with config: &{Name:multinode-174950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-174950 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:52:58.468629 2909621 start.go:125] createHost starting for "" (driver="docker")
	I0914 22:52:58.471244 2909621 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0914 22:52:58.471478 2909621 start.go:159] libmachine.API.Create for "multinode-174950" (driver="docker")
	I0914 22:52:58.471502 2909621 client.go:168] LocalClient.Create starting
	I0914 22:52:58.471579 2909621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem
	I0914 22:52:58.471622 2909621 main.go:141] libmachine: Decoding PEM data...
	I0914 22:52:58.471641 2909621 main.go:141] libmachine: Parsing certificate...
	I0914 22:52:58.471696 2909621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem
	I0914 22:52:58.471721 2909621 main.go:141] libmachine: Decoding PEM data...
	I0914 22:52:58.471732 2909621 main.go:141] libmachine: Parsing certificate...
	I0914 22:52:58.472084 2909621 cli_runner.go:164] Run: docker network inspect multinode-174950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0914 22:52:58.488686 2909621 cli_runner.go:211] docker network inspect multinode-174950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0914 22:52:58.488763 2909621 network_create.go:281] running [docker network inspect multinode-174950] to gather additional debugging logs...
	I0914 22:52:58.488783 2909621 cli_runner.go:164] Run: docker network inspect multinode-174950
	W0914 22:52:58.504957 2909621 cli_runner.go:211] docker network inspect multinode-174950 returned with exit code 1
	I0914 22:52:58.504987 2909621 network_create.go:284] error running [docker network inspect multinode-174950]: docker network inspect multinode-174950: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-174950 not found
	I0914 22:52:58.505010 2909621 network_create.go:286] output of [docker network inspect multinode-174950]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-174950 not found
	
	** /stderr **
	I0914 22:52:58.505072 2909621 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 22:52:58.521962 2909621 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1af2c56fe484 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:bc:82:c7:51} reservation:<nil>}
	I0914 22:52:58.522330 2909621 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000bfaa30}
	I0914 22:52:58.522352 2909621 network_create.go:123] attempt to create docker network multinode-174950 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0914 22:52:58.522407 2909621 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-174950 multinode-174950
	I0914 22:52:58.588468 2909621 network_create.go:107] docker network multinode-174950 192.168.58.0/24 created
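The free-subnet search and "docker network create" above give the new cluster its own bridge network (192.168.58.0/24, gateway 192.168.58.1). Below is a minimal Go sketch of a verification helper for this step, decoding the same IPAM data that the --format templates in the inspect calls extract; the helper itself is an assumption for illustration, not minikube code.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// ipamConfig holds the two fields the inspect templates above read per IPAM entry.
type ipamConfig struct {
	Subnet  string `json:"Subnet"`
	Gateway string `json:"Gateway"`
}

func main() {
	// Same data source as the templated inspect calls in the log, but as plain JSON.
	out, err := exec.Command("docker", "network", "inspect",
		"multinode-174950", "--format", "{{json .IPAM.Config}}").Output()
	if err != nil {
		log.Fatalf("docker network inspect failed: %v", err)
	}
	var cfgs []ipamConfig
	if err := json.Unmarshal(out, &cfgs); err != nil {
		log.Fatalf("decoding IPAM config: %v", err)
	}
	for _, c := range cfgs {
		// For the network created above, expect 192.168.58.0/24 and 192.168.58.1.
		fmt.Printf("subnet=%s gateway=%s\n", c.Subnet, c.Gateway)
	}
}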
	I0914 22:52:58.588688 2909621 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-174950" container
	I0914 22:52:58.588778 2909621 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0914 22:52:58.605108 2909621 cli_runner.go:164] Run: docker volume create multinode-174950 --label name.minikube.sigs.k8s.io=multinode-174950 --label created_by.minikube.sigs.k8s.io=true
	I0914 22:52:58.622601 2909621 oci.go:103] Successfully created a docker volume multinode-174950
	I0914 22:52:58.622688 2909621 cli_runner.go:164] Run: docker run --rm --name multinode-174950-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-174950 --entrypoint /usr/bin/test -v multinode-174950:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 -d /var/lib
	I0914 22:52:59.194800 2909621 oci.go:107] Successfully prepared a docker volume multinode-174950
	I0914 22:52:59.194838 2909621 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:52:59.194859 2909621 kic.go:190] Starting extracting preloaded images to volume ...
	I0914 22:52:59.194943 2909621 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-174950:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 -I lz4 -xf /preloaded.tar -C /extractDir
	I0914 22:53:03.405550 2909621 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-174950:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 -I lz4 -xf /preloaded.tar -C /extractDir: (4.210564781s)
	I0914 22:53:03.405584 2909621 kic.go:199] duration metric: took 4.210722 seconds to extract preloaded images to volume
	W0914 22:53:03.405723 2909621 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0914 22:53:03.405831 2909621 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0914 22:53:03.470803 2909621 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-174950 --name multinode-174950 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-174950 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-174950 --network multinode-174950 --ip 192.168.58.2 --volume multinode-174950:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503
	I0914 22:53:03.804284 2909621 cli_runner.go:164] Run: docker container inspect multinode-174950 --format={{.State.Running}}
	I0914 22:53:03.830741 2909621 cli_runner.go:164] Run: docker container inspect multinode-174950 --format={{.State.Status}}
	I0914 22:53:03.859338 2909621 cli_runner.go:164] Run: docker exec multinode-174950 stat /var/lib/dpkg/alternatives/iptables
	I0914 22:53:03.922426 2909621 oci.go:144] the created container "multinode-174950" has a running status.
	I0914 22:53:03.922457 2909621 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/multinode-174950/id_rsa...
	I0914 22:53:05.027822 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/multinode-174950/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0914 22:53:05.027921 2909621 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/multinode-174950/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0914 22:53:05.050456 2909621 cli_runner.go:164] Run: docker container inspect multinode-174950 --format={{.State.Status}}
	I0914 22:53:05.070768 2909621 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0914 22:53:05.070787 2909621 kic_runner.go:114] Args: [docker exec --privileged multinode-174950 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0914 22:53:05.130108 2909621 cli_runner.go:164] Run: docker container inspect multinode-174950 --format={{.State.Status}}
	I0914 22:53:05.151519 2909621 machine.go:88] provisioning docker machine ...
	I0914 22:53:05.151549 2909621 ubuntu.go:169] provisioning hostname "multinode-174950"
	I0914 22:53:05.151622 2909621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-174950
	I0914 22:53:05.169363 2909621 main.go:141] libmachine: Using SSH client type: native
	I0914 22:53:05.169841 2909621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36463 <nil> <nil>}
	I0914 22:53:05.169859 2909621 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-174950 && echo "multinode-174950" | sudo tee /etc/hostname
	I0914 22:53:05.330474 2909621 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-174950
	
	I0914 22:53:05.330556 2909621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-174950
	I0914 22:53:05.349033 2909621 main.go:141] libmachine: Using SSH client type: native
	I0914 22:53:05.349444 2909621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36463 <nil> <nil>}
	I0914 22:53:05.349468 2909621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-174950' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-174950/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-174950' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:53:05.498068 2909621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:53:05.498097 2909621 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17243-2840729/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-2840729/.minikube}
	I0914 22:53:05.498130 2909621 ubuntu.go:177] setting up certificates
	I0914 22:53:05.498144 2909621 provision.go:83] configureAuth start
	I0914 22:53:05.498211 2909621 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-174950
	I0914 22:53:05.517505 2909621 provision.go:138] copyHostCerts
	I0914 22:53:05.517544 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem
	I0914 22:53:05.517573 2909621 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem, removing ...
	I0914 22:53:05.517583 2909621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem
	I0914 22:53:05.517658 2909621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem (1078 bytes)
	I0914 22:53:05.517741 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem
	I0914 22:53:05.517762 2909621 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem, removing ...
	I0914 22:53:05.517767 2909621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem
	I0914 22:53:05.517804 2909621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem (1123 bytes)
	I0914 22:53:05.517851 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem
	I0914 22:53:05.517870 2909621 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem, removing ...
	I0914 22:53:05.517877 2909621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem
	I0914 22:53:05.517902 2909621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem (1675 bytes)
	I0914 22:53:05.517950 2909621 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem org=jenkins.multinode-174950 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-174950]
	I0914 22:53:05.779922 2909621 provision.go:172] copyRemoteCerts
	I0914 22:53:05.779998 2909621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:53:05.780040 2909621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-174950
	I0914 22:53:05.799598 2909621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36463 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/multinode-174950/id_rsa Username:docker}
	I0914 22:53:05.902948 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 22:53:05.903017 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:53:05.932209 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 22:53:05.932287 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0914 22:53:05.960078 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 22:53:05.960140 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 22:53:05.988763 2909621 provision.go:86] duration metric: configureAuth took 490.601648ms
	I0914 22:53:05.988836 2909621 ubuntu.go:193] setting minikube options for container-runtime
	I0914 22:53:05.989077 2909621 config.go:182] Loaded profile config "multinode-174950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:53:05.989218 2909621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-174950
	I0914 22:53:06.007642 2909621 main.go:141] libmachine: Using SSH client type: native
	I0914 22:53:06.008066 2909621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36463 <nil> <nil>}
	I0914 22:53:06.008091 2909621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:53:06.258419 2909621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:53:06.258442 2909621 machine.go:91] provisioned docker machine in 1.106903007s
	I0914 22:53:06.258452 2909621 client.go:171] LocalClient.Create took 7.786944792s
	I0914 22:53:06.258493 2909621 start.go:167] duration metric: libmachine.API.Create for "multinode-174950" took 7.787015298s
	I0914 22:53:06.258502 2909621 start.go:300] post-start starting for "multinode-174950" (driver="docker")
	I0914 22:53:06.258512 2909621 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:53:06.258599 2909621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:53:06.258660 2909621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-174950
	I0914 22:53:06.277273 2909621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36463 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/multinode-174950/id_rsa Username:docker}
	I0914 22:53:06.379237 2909621 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:53:06.383046 2909621 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0914 22:53:06.383068 2909621 command_runner.go:130] > NAME="Ubuntu"
	I0914 22:53:06.383075 2909621 command_runner.go:130] > VERSION_ID="22.04"
	I0914 22:53:06.383082 2909621 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0914 22:53:06.383090 2909621 command_runner.go:130] > VERSION_CODENAME=jammy
	I0914 22:53:06.383095 2909621 command_runner.go:130] > ID=ubuntu
	I0914 22:53:06.383101 2909621 command_runner.go:130] > ID_LIKE=debian
	I0914 22:53:06.383108 2909621 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0914 22:53:06.383114 2909621 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0914 22:53:06.383128 2909621 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0914 22:53:06.383137 2909621 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0914 22:53:06.383147 2909621 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0914 22:53:06.383244 2909621 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 22:53:06.383274 2909621 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 22:53:06.383290 2909621 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 22:53:06.383297 2909621 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0914 22:53:06.383307 2909621 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-2840729/.minikube/addons for local assets ...
	I0914 22:53:06.383368 2909621 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-2840729/.minikube/files for local assets ...
	I0914 22:53:06.383450 2909621 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem -> 28461092.pem in /etc/ssl/certs
	I0914 22:53:06.383462 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem -> /etc/ssl/certs/28461092.pem
	I0914 22:53:06.383570 2909621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:53:06.393891 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem --> /etc/ssl/certs/28461092.pem (1708 bytes)
	I0914 22:53:06.421649 2909621 start.go:303] post-start completed in 163.131723ms
	I0914 22:53:06.422043 2909621 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-174950
	I0914 22:53:06.440158 2909621 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/config.json ...
	I0914 22:53:06.440431 2909621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 22:53:06.440484 2909621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-174950
	I0914 22:53:06.458049 2909621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36463 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/multinode-174950/id_rsa Username:docker}
	I0914 22:53:06.554319 2909621 command_runner.go:130] > 11%
	I0914 22:53:06.554395 2909621 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 22:53:06.559484 2909621 command_runner.go:130] > 174G
	I0914 22:53:06.559892 2909621 start.go:128] duration metric: createHost completed in 8.091251144s
	I0914 22:53:06.559909 2909621 start.go:83] releasing machines lock for "multinode-174950", held for 8.091473693s
	I0914 22:53:06.559982 2909621 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-174950
	I0914 22:53:06.579560 2909621 ssh_runner.go:195] Run: cat /version.json
	I0914 22:53:06.579614 2909621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-174950
	I0914 22:53:06.579852 2909621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:53:06.579906 2909621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-174950
	I0914 22:53:06.603677 2909621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36463 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/multinode-174950/id_rsa Username:docker}
	I0914 22:53:06.616456 2909621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36463 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/multinode-174950/id_rsa Username:docker}
	I0914 22:53:06.704733 2909621 command_runner.go:130] > {"iso_version": "v1.31.0-1694468241-17194", "kicbase_version": "v0.0.40-1694625416-17243", "minikube_version": "v1.31.2", "commit": "b8afb9b4a853f4e7882dbdfb53995784a48fcea7"}
	I0914 22:53:06.704864 2909621 ssh_runner.go:195] Run: systemctl --version
	I0914 22:53:06.847835 2909621 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0914 22:53:06.847887 2909621 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.10)
	I0914 22:53:06.847926 2909621 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0914 22:53:06.848002 2909621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:53:06.993604 2909621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 22:53:06.998605 2909621 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0914 22:53:06.998629 2909621 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0914 22:53:06.998640 2909621 command_runner.go:130] > Device: 3ah/58d	Inode: 2089567     Links: 1
	I0914 22:53:06.998649 2909621 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 22:53:06.998656 2909621 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0914 22:53:06.998663 2909621 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0914 22:53:06.998672 2909621 command_runner.go:130] > Change: 2023-09-14 22:27:04.470483202 +0000
	I0914 22:53:06.998679 2909621 command_runner.go:130] >  Birth: 2023-09-14 22:27:04.470483202 +0000
	I0914 22:53:06.999096 2909621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:53:07.021222 2909621 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0914 22:53:07.021296 2909621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:53:07.062321 2909621 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0914 22:53:07.062353 2909621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0914 22:53:07.062361 2909621 start.go:469] detecting cgroup driver to use...
	I0914 22:53:07.062389 2909621 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0914 22:53:07.062439 2909621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:53:07.080241 2909621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:53:07.093451 2909621 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:53:07.093555 2909621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:53:07.109400 2909621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:53:07.125916 2909621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:53:07.216439 2909621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:53:07.232714 2909621 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0914 22:53:07.325356 2909621 docker.go:212] disabling docker service ...
	I0914 22:53:07.325476 2909621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:53:07.346962 2909621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:53:07.360659 2909621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:53:07.453323 2909621 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0914 22:53:07.453472 2909621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:53:07.561230 2909621 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0914 22:53:07.561579 2909621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:53:07.575180 2909621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:53:07.593749 2909621 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0914 22:53:07.595191 2909621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:53:07.595254 2909621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:53:07.606684 2909621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:53:07.606752 2909621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:53:07.618263 2909621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:53:07.629940 2909621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
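The sed edits above point CRI-O at the registry.k8s.io/pause:3.9 pause image and switch it to the cgroupfs cgroup manager with conmon in the "pod" cgroup, all inside /etc/crio/crio.conf.d/02-crio.conf. Below is a minimal Go sketch of the same substitutions applied to an in-memory sample; the sample config text is an assumption, the real drop-in file in the container is larger.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Assumed stand-in for /etc/crio/crio.conf.d/02-crio.conf.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.6"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

	// sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// sed '/conmon_cgroup = .*/d' followed by
	// sed '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}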
	I0914 22:53:07.641683 2909621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:53:07.652741 2909621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:53:07.661722 2909621 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0914 22:53:07.662835 2909621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:53:07.673241 2909621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:53:07.775765 2909621 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 22:53:07.893184 2909621 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:53:07.893252 2909621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:53:07.897584 2909621 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0914 22:53:07.897603 2909621 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0914 22:53:07.897611 2909621 command_runner.go:130] > Device: 43h/67d	Inode: 190         Links: 1
	I0914 22:53:07.897620 2909621 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 22:53:07.897626 2909621 command_runner.go:130] > Access: 2023-09-14 22:53:07.877362213 +0000
	I0914 22:53:07.897634 2909621 command_runner.go:130] > Modify: 2023-09-14 22:53:07.877362213 +0000
	I0914 22:53:07.897644 2909621 command_runner.go:130] > Change: 2023-09-14 22:53:07.877362213 +0000
	I0914 22:53:07.897657 2909621 command_runner.go:130] >  Birth: -
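After restarting CRI-O, start.go waits up to 60 seconds for /var/run/crio/crio.sock before trusting the runtime, then stats the socket as shown above. A minimal sketch of such a wait loop follows; it is a hypothetical helper for illustration, not minikube's implementation.

package main

import (
	"fmt"
	"log"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is present
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		log.Fatal(err)
	}
	fmt.Println("crio socket is ready")
}

As the next lines of the log show, a second 60-second wait then covers the crictl version check on top of the socket wait.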
	I0914 22:53:07.897873 2909621 start.go:537] Will wait 60s for crictl version
	I0914 22:53:07.897931 2909621 ssh_runner.go:195] Run: which crictl
	I0914 22:53:07.901775 2909621 command_runner.go:130] > /usr/bin/crictl
	I0914 22:53:07.902153 2909621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:53:07.942556 2909621 command_runner.go:130] > Version:  0.1.0
	I0914 22:53:07.942795 2909621 command_runner.go:130] > RuntimeName:  cri-o
	I0914 22:53:07.942956 2909621 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0914 22:53:07.943121 2909621 command_runner.go:130] > RuntimeApiVersion:  v1
	I0914 22:53:07.945738 2909621 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0914 22:53:07.945817 2909621 ssh_runner.go:195] Run: crio --version
	I0914 22:53:07.986746 2909621 command_runner.go:130] > crio version 1.24.6
	I0914 22:53:07.986768 2909621 command_runner.go:130] > Version:          1.24.6
	I0914 22:53:07.986777 2909621 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0914 22:53:07.986782 2909621 command_runner.go:130] > GitTreeState:     clean
	I0914 22:53:07.986790 2909621 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0914 22:53:07.986795 2909621 command_runner.go:130] > GoVersion:        go1.18.2
	I0914 22:53:07.986800 2909621 command_runner.go:130] > Compiler:         gc
	I0914 22:53:07.986807 2909621 command_runner.go:130] > Platform:         linux/arm64
	I0914 22:53:07.986816 2909621 command_runner.go:130] > Linkmode:         dynamic
	I0914 22:53:07.986825 2909621 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0914 22:53:07.986834 2909621 command_runner.go:130] > SeccompEnabled:   true
	I0914 22:53:07.986839 2909621 command_runner.go:130] > AppArmorEnabled:  false
	I0914 22:53:07.988913 2909621 ssh_runner.go:195] Run: crio --version
	I0914 22:53:08.029404 2909621 command_runner.go:130] > crio version 1.24.6
	I0914 22:53:08.029426 2909621 command_runner.go:130] > Version:          1.24.6
	I0914 22:53:08.029435 2909621 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0914 22:53:08.029440 2909621 command_runner.go:130] > GitTreeState:     clean
	I0914 22:53:08.029447 2909621 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0914 22:53:08.029453 2909621 command_runner.go:130] > GoVersion:        go1.18.2
	I0914 22:53:08.029458 2909621 command_runner.go:130] > Compiler:         gc
	I0914 22:53:08.029463 2909621 command_runner.go:130] > Platform:         linux/arm64
	I0914 22:53:08.029469 2909621 command_runner.go:130] > Linkmode:         dynamic
	I0914 22:53:08.029481 2909621 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0914 22:53:08.029490 2909621 command_runner.go:130] > SeccompEnabled:   true
	I0914 22:53:08.029495 2909621 command_runner.go:130] > AppArmorEnabled:  false
	I0914 22:53:08.034855 2909621 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0914 22:53:08.036874 2909621 cli_runner.go:164] Run: docker network inspect multinode-174950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 22:53:08.054385 2909621 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0914 22:53:08.058960 2909621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:53:08.071811 2909621 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:53:08.071906 2909621 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:53:08.138778 2909621 command_runner.go:130] > {
	I0914 22:53:08.138800 2909621 command_runner.go:130] >   "images": [
	I0914 22:53:08.138805 2909621 command_runner.go:130] >     {
	I0914 22:53:08.138815 2909621 command_runner.go:130] >       "id": "b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79",
	I0914 22:53:08.138821 2909621 command_runner.go:130] >       "repoTags": [
	I0914 22:53:08.138829 2909621 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0914 22:53:08.138834 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.138839 2909621 command_runner.go:130] >       "repoDigests": [
	I0914 22:53:08.138853 2909621 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f",
	I0914 22:53:08.138863 2909621 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"
	I0914 22:53:08.138867 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.138873 2909621 command_runner.go:130] >       "size": "60881430",
	I0914 22:53:08.138878 2909621 command_runner.go:130] >       "uid": null,
	I0914 22:53:08.138883 2909621 command_runner.go:130] >       "username": "",
	I0914 22:53:08.138891 2909621 command_runner.go:130] >       "spec": null,
	I0914 22:53:08.138896 2909621 command_runner.go:130] >       "pinned": false
	I0914 22:53:08.138900 2909621 command_runner.go:130] >     },
	I0914 22:53:08.138905 2909621 command_runner.go:130] >     {
	I0914 22:53:08.138913 2909621 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0914 22:53:08.138917 2909621 command_runner.go:130] >       "repoTags": [
	I0914 22:53:08.138924 2909621 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0914 22:53:08.138928 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.138933 2909621 command_runner.go:130] >       "repoDigests": [
	I0914 22:53:08.138943 2909621 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0914 22:53:08.138953 2909621 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0914 22:53:08.138957 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.138964 2909621 command_runner.go:130] >       "size": "29037500",
	I0914 22:53:08.138969 2909621 command_runner.go:130] >       "uid": null,
	I0914 22:53:08.138975 2909621 command_runner.go:130] >       "username": "",
	I0914 22:53:08.138980 2909621 command_runner.go:130] >       "spec": null,
	I0914 22:53:08.138985 2909621 command_runner.go:130] >       "pinned": false
	I0914 22:53:08.138990 2909621 command_runner.go:130] >     },
	I0914 22:53:08.138994 2909621 command_runner.go:130] >     {
	I0914 22:53:08.139002 2909621 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0914 22:53:08.139007 2909621 command_runner.go:130] >       "repoTags": [
	I0914 22:53:08.139014 2909621 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0914 22:53:08.139019 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.139024 2909621 command_runner.go:130] >       "repoDigests": [
	I0914 22:53:08.139033 2909621 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0914 22:53:08.139043 2909621 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0914 22:53:08.139047 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.139053 2909621 command_runner.go:130] >       "size": "51393451",
	I0914 22:53:08.139058 2909621 command_runner.go:130] >       "uid": null,
	I0914 22:53:08.139062 2909621 command_runner.go:130] >       "username": "",
	I0914 22:53:08.139067 2909621 command_runner.go:130] >       "spec": null,
	I0914 22:53:08.139077 2909621 command_runner.go:130] >       "pinned": false
	I0914 22:53:08.139081 2909621 command_runner.go:130] >     },
	I0914 22:53:08.139086 2909621 command_runner.go:130] >     {
	I0914 22:53:08.139094 2909621 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I0914 22:53:08.139099 2909621 command_runner.go:130] >       "repoTags": [
	I0914 22:53:08.139105 2909621 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0914 22:53:08.139110 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.139115 2909621 command_runner.go:130] >       "repoDigests": [
	I0914 22:53:08.139124 2909621 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I0914 22:53:08.139133 2909621 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I0914 22:53:08.139140 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.139146 2909621 command_runner.go:130] >       "size": "182203183",
	I0914 22:53:08.139151 2909621 command_runner.go:130] >       "uid": {
	I0914 22:53:08.139155 2909621 command_runner.go:130] >         "value": "0"
	I0914 22:53:08.139160 2909621 command_runner.go:130] >       },
	I0914 22:53:08.139166 2909621 command_runner.go:130] >       "username": "",
	I0914 22:53:08.139171 2909621 command_runner.go:130] >       "spec": null,
	I0914 22:53:08.139176 2909621 command_runner.go:130] >       "pinned": false
	I0914 22:53:08.139181 2909621 command_runner.go:130] >     },
	I0914 22:53:08.139185 2909621 command_runner.go:130] >     {
	I0914 22:53:08.139193 2909621 command_runner.go:130] >       "id": "b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a",
	I0914 22:53:08.139198 2909621 command_runner.go:130] >       "repoTags": [
	I0914 22:53:08.139204 2909621 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.1"
	I0914 22:53:08.139209 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.139215 2909621 command_runner.go:130] >       "repoDigests": [
	I0914 22:53:08.139225 2909621 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:d4ad404d1c05c2f18b76f5d6936b838be07fed14b3ffefd09a6b2f0c20e3ef5c",
	I0914 22:53:08.139235 2909621 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"
	I0914 22:53:08.139240 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.139245 2909621 command_runner.go:130] >       "size": "120857550",
	I0914 22:53:08.139250 2909621 command_runner.go:130] >       "uid": {
	I0914 22:53:08.139255 2909621 command_runner.go:130] >         "value": "0"
	I0914 22:53:08.139259 2909621 command_runner.go:130] >       },
	I0914 22:53:08.139264 2909621 command_runner.go:130] >       "username": "",
	I0914 22:53:08.139269 2909621 command_runner.go:130] >       "spec": null,
	I0914 22:53:08.139274 2909621 command_runner.go:130] >       "pinned": false
	I0914 22:53:08.139279 2909621 command_runner.go:130] >     },
	I0914 22:53:08.139283 2909621 command_runner.go:130] >     {
	I0914 22:53:08.139291 2909621 command_runner.go:130] >       "id": "8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965",
	I0914 22:53:08.139296 2909621 command_runner.go:130] >       "repoTags": [
	I0914 22:53:08.139302 2909621 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.1"
	I0914 22:53:08.139306 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.139311 2909621 command_runner.go:130] >       "repoDigests": [
	I0914 22:53:08.139321 2909621 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4a0dd5abeba8e3ca67884fe9db43e8dbb299ad3199f0c6281e8a70f03ce4248f",
	I0914 22:53:08.139330 2909621 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"
	I0914 22:53:08.139335 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.139341 2909621 command_runner.go:130] >       "size": "117187378",
	I0914 22:53:08.139346 2909621 command_runner.go:130] >       "uid": {
	I0914 22:53:08.139351 2909621 command_runner.go:130] >         "value": "0"
	I0914 22:53:08.139355 2909621 command_runner.go:130] >       },
	I0914 22:53:08.139360 2909621 command_runner.go:130] >       "username": "",
	I0914 22:53:08.139365 2909621 command_runner.go:130] >       "spec": null,
	I0914 22:53:08.139370 2909621 command_runner.go:130] >       "pinned": false
	I0914 22:53:08.139374 2909621 command_runner.go:130] >     },
	I0914 22:53:08.139378 2909621 command_runner.go:130] >     {
	I0914 22:53:08.139386 2909621 command_runner.go:130] >       "id": "812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26",
	I0914 22:53:08.139391 2909621 command_runner.go:130] >       "repoTags": [
	I0914 22:53:08.139398 2909621 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.1"
	I0914 22:53:08.139402 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.139407 2909621 command_runner.go:130] >       "repoDigests": [
	I0914 22:53:08.139416 2909621 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c",
	I0914 22:53:08.139425 2909621 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a9d9eaff8bae5cb45cc640255fd1490c85c3517d92f2c78bcd71dde9a12d5220"
	I0914 22:53:08.139430 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.139439 2909621 command_runner.go:130] >       "size": "69926807",
	I0914 22:53:08.139444 2909621 command_runner.go:130] >       "uid": null,
	I0914 22:53:08.139449 2909621 command_runner.go:130] >       "username": "",
	I0914 22:53:08.139454 2909621 command_runner.go:130] >       "spec": null,
	I0914 22:53:08.139458 2909621 command_runner.go:130] >       "pinned": false
	I0914 22:53:08.139463 2909621 command_runner.go:130] >     },
	I0914 22:53:08.139467 2909621 command_runner.go:130] >     {
	I0914 22:53:08.139475 2909621 command_runner.go:130] >       "id": "b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87",
	I0914 22:53:08.139480 2909621 command_runner.go:130] >       "repoTags": [
	I0914 22:53:08.139486 2909621 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.1"
	I0914 22:53:08.139491 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.139496 2909621 command_runner.go:130] >       "repoDigests": [
	I0914 22:53:08.139511 2909621 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0bb4ad9c0c3d2258bc97616ddb51291e5d20d6ba7d4406767f4355f56fab842d",
	I0914 22:53:08.139521 2909621 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4"
	I0914 22:53:08.139525 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.139530 2909621 command_runner.go:130] >       "size": "59188020",
	I0914 22:53:08.139534 2909621 command_runner.go:130] >       "uid": {
	I0914 22:53:08.139540 2909621 command_runner.go:130] >         "value": "0"
	I0914 22:53:08.139544 2909621 command_runner.go:130] >       },
	I0914 22:53:08.139550 2909621 command_runner.go:130] >       "username": "",
	I0914 22:53:08.139554 2909621 command_runner.go:130] >       "spec": null,
	I0914 22:53:08.139559 2909621 command_runner.go:130] >       "pinned": false
	I0914 22:53:08.139564 2909621 command_runner.go:130] >     },
	I0914 22:53:08.139568 2909621 command_runner.go:130] >     {
	I0914 22:53:08.139576 2909621 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0914 22:53:08.139581 2909621 command_runner.go:130] >       "repoTags": [
	I0914 22:53:08.139589 2909621 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0914 22:53:08.139594 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.139599 2909621 command_runner.go:130] >       "repoDigests": [
	I0914 22:53:08.139609 2909621 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0914 22:53:08.139619 2909621 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0914 22:53:08.139623 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.139629 2909621 command_runner.go:130] >       "size": "520014",
	I0914 22:53:08.139633 2909621 command_runner.go:130] >       "uid": {
	I0914 22:53:08.139638 2909621 command_runner.go:130] >         "value": "65535"
	I0914 22:53:08.139643 2909621 command_runner.go:130] >       },
	I0914 22:53:08.139648 2909621 command_runner.go:130] >       "username": "",
	I0914 22:53:08.139653 2909621 command_runner.go:130] >       "spec": null,
	I0914 22:53:08.139658 2909621 command_runner.go:130] >       "pinned": false
	I0914 22:53:08.139662 2909621 command_runner.go:130] >     }
	I0914 22:53:08.139667 2909621 command_runner.go:130] >   ]
	I0914 22:53:08.139671 2909621 command_runner.go:130] > }
	I0914 22:53:08.139864 2909621 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 22:53:08.139872 2909621 crio.go:415] Images already preloaded, skipping extraction
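The image dump above is plain "crictl images --output json" output, which crio.go compares against the preload manifest before deciding that extraction can be skipped. Below is a minimal Go sketch for decoding that shape; the struct and field names are assumptions chosen to match the JSON keys shown, not minikube's own types.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// criImage mirrors the per-image fields visible in the dump above.
type criImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
	Pinned      bool     `json:"pinned"`
}

// criImageList is the top-level object: {"images": [...]}.
type criImageList struct {
	Images []criImage `json:"images"`
}

func main() {
	// Same command as in the log; needs crictl and root on the node.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatalf("crictl images failed: %v", err)
	}
	var list criImageList
	if err := json.Unmarshal(out, &list); err != nil {
		log.Fatalf("decoding image list: %v", err)
	}
	for _, img := range list.Images {
		fmt.Println(img.RepoTags, img.Size)
	}
}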
	I0914 22:53:08.139924 2909621 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 22:53:08.176201 2909621 command_runner.go:130] > {
	I0914 22:53:08.176218 2909621 command_runner.go:130] >   "images": [
	I0914 22:53:08.176224 2909621 command_runner.go:130] >     {
	I0914 22:53:08.176233 2909621 command_runner.go:130] >       "id": "b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79",
	I0914 22:53:08.176239 2909621 command_runner.go:130] >       "repoTags": [
	I0914 22:53:08.176246 2909621 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0914 22:53:08.176251 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.176256 2909621 command_runner.go:130] >       "repoDigests": [
	I0914 22:53:08.176267 2909621 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f",
	I0914 22:53:08.176276 2909621 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"
	I0914 22:53:08.176280 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.176286 2909621 command_runner.go:130] >       "size": "60881430",
	I0914 22:53:08.176291 2909621 command_runner.go:130] >       "uid": null,
	I0914 22:53:08.176296 2909621 command_runner.go:130] >       "username": "",
	I0914 22:53:08.176307 2909621 command_runner.go:130] >       "spec": null,
	I0914 22:53:08.176312 2909621 command_runner.go:130] >       "pinned": false
	I0914 22:53:08.176317 2909621 command_runner.go:130] >     },
	I0914 22:53:08.176321 2909621 command_runner.go:130] >     {
	I0914 22:53:08.176329 2909621 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0914 22:53:08.176333 2909621 command_runner.go:130] >       "repoTags": [
	I0914 22:53:08.176340 2909621 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0914 22:53:08.176344 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.176349 2909621 command_runner.go:130] >       "repoDigests": [
	I0914 22:53:08.176359 2909621 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0914 22:53:08.176368 2909621 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0914 22:53:08.176374 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.176381 2909621 command_runner.go:130] >       "size": "29037500",
	I0914 22:53:08.176386 2909621 command_runner.go:130] >       "uid": null,
	I0914 22:53:08.176391 2909621 command_runner.go:130] >       "username": "",
	I0914 22:53:08.176396 2909621 command_runner.go:130] >       "spec": null,
	I0914 22:53:08.176401 2909621 command_runner.go:130] >       "pinned": false
	I0914 22:53:08.176407 2909621 command_runner.go:130] >     },
	I0914 22:53:08.176412 2909621 command_runner.go:130] >     {
	I0914 22:53:08.176419 2909621 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0914 22:53:08.176424 2909621 command_runner.go:130] >       "repoTags": [
	I0914 22:53:08.176431 2909621 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0914 22:53:08.176435 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.176440 2909621 command_runner.go:130] >       "repoDigests": [
	I0914 22:53:08.176450 2909621 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0914 22:53:08.176459 2909621 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0914 22:53:08.176464 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.176469 2909621 command_runner.go:130] >       "size": "51393451",
	I0914 22:53:08.176474 2909621 command_runner.go:130] >       "uid": null,
	I0914 22:53:08.176479 2909621 command_runner.go:130] >       "username": "",
	I0914 22:53:08.176484 2909621 command_runner.go:130] >       "spec": null,
	I0914 22:53:08.176512 2909621 command_runner.go:130] >       "pinned": false
	I0914 22:53:08.176518 2909621 command_runner.go:130] >     },
	I0914 22:53:08.176522 2909621 command_runner.go:130] >     {
	I0914 22:53:08.176530 2909621 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I0914 22:53:08.176535 2909621 command_runner.go:130] >       "repoTags": [
	I0914 22:53:08.176541 2909621 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0914 22:53:08.176545 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.176550 2909621 command_runner.go:130] >       "repoDigests": [
	I0914 22:53:08.176561 2909621 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I0914 22:53:08.176570 2909621 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I0914 22:53:08.176577 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.176582 2909621 command_runner.go:130] >       "size": "182203183",
	I0914 22:53:08.176587 2909621 command_runner.go:130] >       "uid": {
	I0914 22:53:08.176592 2909621 command_runner.go:130] >         "value": "0"
	I0914 22:53:08.176596 2909621 command_runner.go:130] >       },
	I0914 22:53:08.176601 2909621 command_runner.go:130] >       "username": "",
	I0914 22:53:08.176607 2909621 command_runner.go:130] >       "spec": null,
	I0914 22:53:08.176612 2909621 command_runner.go:130] >       "pinned": false
	I0914 22:53:08.176616 2909621 command_runner.go:130] >     },
	I0914 22:53:08.176621 2909621 command_runner.go:130] >     {
	I0914 22:53:08.176628 2909621 command_runner.go:130] >       "id": "b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a",
	I0914 22:53:08.176633 2909621 command_runner.go:130] >       "repoTags": [
	I0914 22:53:08.176639 2909621 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.1"
	I0914 22:53:08.176644 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.176650 2909621 command_runner.go:130] >       "repoDigests": [
	I0914 22:53:08.176660 2909621 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:d4ad404d1c05c2f18b76f5d6936b838be07fed14b3ffefd09a6b2f0c20e3ef5c",
	I0914 22:53:08.176669 2909621 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"
	I0914 22:53:08.176674 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.176679 2909621 command_runner.go:130] >       "size": "120857550",
	I0914 22:53:08.176683 2909621 command_runner.go:130] >       "uid": {
	I0914 22:53:08.176688 2909621 command_runner.go:130] >         "value": "0"
	I0914 22:53:08.176693 2909621 command_runner.go:130] >       },
	I0914 22:53:08.176698 2909621 command_runner.go:130] >       "username": "",
	I0914 22:53:08.176703 2909621 command_runner.go:130] >       "spec": null,
	I0914 22:53:08.176708 2909621 command_runner.go:130] >       "pinned": false
	I0914 22:53:08.176712 2909621 command_runner.go:130] >     },
	I0914 22:53:08.176717 2909621 command_runner.go:130] >     {
	I0914 22:53:08.176724 2909621 command_runner.go:130] >       "id": "8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965",
	I0914 22:53:08.176729 2909621 command_runner.go:130] >       "repoTags": [
	I0914 22:53:08.176736 2909621 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.1"
	I0914 22:53:08.176740 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.176745 2909621 command_runner.go:130] >       "repoDigests": [
	I0914 22:53:08.176755 2909621 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4a0dd5abeba8e3ca67884fe9db43e8dbb299ad3199f0c6281e8a70f03ce4248f",
	I0914 22:53:08.176765 2909621 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"
	I0914 22:53:08.176769 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.176776 2909621 command_runner.go:130] >       "size": "117187378",
	I0914 22:53:08.176781 2909621 command_runner.go:130] >       "uid": {
	I0914 22:53:08.176786 2909621 command_runner.go:130] >         "value": "0"
	I0914 22:53:08.176790 2909621 command_runner.go:130] >       },
	I0914 22:53:08.176795 2909621 command_runner.go:130] >       "username": "",
	I0914 22:53:08.176800 2909621 command_runner.go:130] >       "spec": null,
	I0914 22:53:08.176805 2909621 command_runner.go:130] >       "pinned": false
	I0914 22:53:08.176810 2909621 command_runner.go:130] >     },
	I0914 22:53:08.176815 2909621 command_runner.go:130] >     {
	I0914 22:53:08.176822 2909621 command_runner.go:130] >       "id": "812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26",
	I0914 22:53:08.176827 2909621 command_runner.go:130] >       "repoTags": [
	I0914 22:53:08.176833 2909621 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.1"
	I0914 22:53:08.176838 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.176843 2909621 command_runner.go:130] >       "repoDigests": [
	I0914 22:53:08.176852 2909621 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c",
	I0914 22:53:08.176861 2909621 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a9d9eaff8bae5cb45cc640255fd1490c85c3517d92f2c78bcd71dde9a12d5220"
	I0914 22:53:08.176866 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.176871 2909621 command_runner.go:130] >       "size": "69926807",
	I0914 22:53:08.176875 2909621 command_runner.go:130] >       "uid": null,
	I0914 22:53:08.176880 2909621 command_runner.go:130] >       "username": "",
	I0914 22:53:08.176885 2909621 command_runner.go:130] >       "spec": null,
	I0914 22:53:08.176890 2909621 command_runner.go:130] >       "pinned": false
	I0914 22:53:08.176894 2909621 command_runner.go:130] >     },
	I0914 22:53:08.176898 2909621 command_runner.go:130] >     {
	I0914 22:53:08.176906 2909621 command_runner.go:130] >       "id": "b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87",
	I0914 22:53:08.176911 2909621 command_runner.go:130] >       "repoTags": [
	I0914 22:53:08.176917 2909621 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.1"
	I0914 22:53:08.176921 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.176926 2909621 command_runner.go:130] >       "repoDigests": [
	I0914 22:53:08.176959 2909621 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0bb4ad9c0c3d2258bc97616ddb51291e5d20d6ba7d4406767f4355f56fab842d",
	I0914 22:53:08.176969 2909621 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4"
	I0914 22:53:08.176973 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.176978 2909621 command_runner.go:130] >       "size": "59188020",
	I0914 22:53:08.176984 2909621 command_runner.go:130] >       "uid": {
	I0914 22:53:08.176988 2909621 command_runner.go:130] >         "value": "0"
	I0914 22:53:08.176993 2909621 command_runner.go:130] >       },
	I0914 22:53:08.176998 2909621 command_runner.go:130] >       "username": "",
	I0914 22:53:08.177003 2909621 command_runner.go:130] >       "spec": null,
	I0914 22:53:08.177008 2909621 command_runner.go:130] >       "pinned": false
	I0914 22:53:08.177013 2909621 command_runner.go:130] >     },
	I0914 22:53:08.177017 2909621 command_runner.go:130] >     {
	I0914 22:53:08.177025 2909621 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0914 22:53:08.177030 2909621 command_runner.go:130] >       "repoTags": [
	I0914 22:53:08.177036 2909621 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0914 22:53:08.177040 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.177045 2909621 command_runner.go:130] >       "repoDigests": [
	I0914 22:53:08.177054 2909621 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0914 22:53:08.177063 2909621 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0914 22:53:08.177068 2909621 command_runner.go:130] >       ],
	I0914 22:53:08.177073 2909621 command_runner.go:130] >       "size": "520014",
	I0914 22:53:08.177077 2909621 command_runner.go:130] >       "uid": {
	I0914 22:53:08.177082 2909621 command_runner.go:130] >         "value": "65535"
	I0914 22:53:08.177087 2909621 command_runner.go:130] >       },
	I0914 22:53:08.177091 2909621 command_runner.go:130] >       "username": "",
	I0914 22:53:08.177096 2909621 command_runner.go:130] >       "spec": null,
	I0914 22:53:08.177101 2909621 command_runner.go:130] >       "pinned": false
	I0914 22:53:08.177106 2909621 command_runner.go:130] >     }
	I0914 22:53:08.177110 2909621 command_runner.go:130] >   ]
	I0914 22:53:08.177114 2909621 command_runner.go:130] > }
	I0914 22:53:08.179276 2909621 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 22:53:08.179296 2909621 cache_images.go:84] Images are preloaded, skipping loading
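The JSON block above is the image inventory that the preload check walks before deciding to skip loading. As a hedged illustration only (the struct, the file name images.json, and the top-level "images" key are assumptions based on the CRI ListImages response shape, not minikube's actual code), the same output can be decoded in Go like this:

// Sketch: decode an image-status JSON dump such as the one logged above.
// Field names mirror the entries shown ("id", "repoTags", "size", ...);
// everything else here is illustrative.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"` // serialized as a string, as in the log
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	raw, err := os.ReadFile("images.json") // e.g. saved output of `crictl images -o json`
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%s (%s bytes)\n", img.RepoTags[0], img.Size)
		}
	}
}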
	I0914 22:53:08.179368 2909621 ssh_runner.go:195] Run: crio config
	I0914 22:53:08.228424 2909621 command_runner.go:130] ! time="2023-09-14 22:53:08.228032963Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0914 22:53:08.228518 2909621 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0914 22:53:08.235251 2909621 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0914 22:53:08.235272 2909621 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0914 22:53:08.235280 2909621 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0914 22:53:08.235284 2909621 command_runner.go:130] > #
	I0914 22:53:08.235292 2909621 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0914 22:53:08.235300 2909621 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0914 22:53:08.235308 2909621 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0914 22:53:08.235317 2909621 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0914 22:53:08.235322 2909621 command_runner.go:130] > # reload'.
	I0914 22:53:08.235329 2909621 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0914 22:53:08.235337 2909621 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0914 22:53:08.235344 2909621 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0914 22:53:08.235352 2909621 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0914 22:53:08.235356 2909621 command_runner.go:130] > [crio]
	I0914 22:53:08.235363 2909621 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0914 22:53:08.235370 2909621 command_runner.go:130] > # containers images, in this directory.
	I0914 22:53:08.235378 2909621 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0914 22:53:08.235388 2909621 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0914 22:53:08.235394 2909621 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0914 22:53:08.235402 2909621 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0914 22:53:08.235410 2909621 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0914 22:53:08.235415 2909621 command_runner.go:130] > # storage_driver = "vfs"
	I0914 22:53:08.235422 2909621 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0914 22:53:08.235429 2909621 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0914 22:53:08.235434 2909621 command_runner.go:130] > # storage_option = [
	I0914 22:53:08.235438 2909621 command_runner.go:130] > # ]
	I0914 22:53:08.235446 2909621 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0914 22:53:08.235453 2909621 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0914 22:53:08.235459 2909621 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0914 22:53:08.235466 2909621 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0914 22:53:08.235474 2909621 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0914 22:53:08.235479 2909621 command_runner.go:130] > # always happen on a node reboot
	I0914 22:53:08.235485 2909621 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0914 22:53:08.235492 2909621 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0914 22:53:08.235500 2909621 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0914 22:53:08.235508 2909621 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0914 22:53:08.235514 2909621 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0914 22:53:08.235528 2909621 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0914 22:53:08.235537 2909621 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0914 22:53:08.235542 2909621 command_runner.go:130] > # internal_wipe = true
	I0914 22:53:08.235549 2909621 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0914 22:53:08.235556 2909621 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0914 22:53:08.235563 2909621 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0914 22:53:08.235570 2909621 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0914 22:53:08.235577 2909621 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0914 22:53:08.235582 2909621 command_runner.go:130] > [crio.api]
	I0914 22:53:08.235590 2909621 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0914 22:53:08.235596 2909621 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0914 22:53:08.235602 2909621 command_runner.go:130] > # IP address on which the stream server will listen.
	I0914 22:53:08.235607 2909621 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0914 22:53:08.235615 2909621 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0914 22:53:08.235621 2909621 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0914 22:53:08.235627 2909621 command_runner.go:130] > # stream_port = "0"
	I0914 22:53:08.235634 2909621 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0914 22:53:08.235639 2909621 command_runner.go:130] > # stream_enable_tls = false
	I0914 22:53:08.235646 2909621 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0914 22:53:08.235651 2909621 command_runner.go:130] > # stream_idle_timeout = ""
	I0914 22:53:08.235659 2909621 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0914 22:53:08.235666 2909621 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0914 22:53:08.235671 2909621 command_runner.go:130] > # minutes.
	I0914 22:53:08.235676 2909621 command_runner.go:130] > # stream_tls_cert = ""
	I0914 22:53:08.235683 2909621 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0914 22:53:08.235691 2909621 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0914 22:53:08.235696 2909621 command_runner.go:130] > # stream_tls_key = ""
	I0914 22:53:08.235703 2909621 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0914 22:53:08.235711 2909621 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0914 22:53:08.235718 2909621 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0914 22:53:08.235723 2909621 command_runner.go:130] > # stream_tls_ca = ""
	I0914 22:53:08.235732 2909621 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0914 22:53:08.235738 2909621 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0914 22:53:08.235746 2909621 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0914 22:53:08.235752 2909621 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0914 22:53:08.235789 2909621 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0914 22:53:08.235798 2909621 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0914 22:53:08.235803 2909621 command_runner.go:130] > [crio.runtime]
	I0914 22:53:08.235810 2909621 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0914 22:53:08.235817 2909621 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0914 22:53:08.235822 2909621 command_runner.go:130] > # "nofile=1024:2048"
	I0914 22:53:08.235829 2909621 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0914 22:53:08.235834 2909621 command_runner.go:130] > # default_ulimits = [
	I0914 22:53:08.235839 2909621 command_runner.go:130] > # ]
	I0914 22:53:08.235846 2909621 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0914 22:53:08.235851 2909621 command_runner.go:130] > # no_pivot = false
	I0914 22:53:08.235858 2909621 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0914 22:53:08.235867 2909621 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0914 22:53:08.235878 2909621 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0914 22:53:08.235885 2909621 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0914 22:53:08.235891 2909621 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0914 22:53:08.235899 2909621 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0914 22:53:08.235904 2909621 command_runner.go:130] > # conmon = ""
	I0914 22:53:08.235910 2909621 command_runner.go:130] > # Cgroup setting for conmon
	I0914 22:53:08.235918 2909621 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0914 22:53:08.235923 2909621 command_runner.go:130] > conmon_cgroup = "pod"
	I0914 22:53:08.235931 2909621 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0914 22:53:08.235937 2909621 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0914 22:53:08.235945 2909621 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0914 22:53:08.235950 2909621 command_runner.go:130] > # conmon_env = [
	I0914 22:53:08.235954 2909621 command_runner.go:130] > # ]
	I0914 22:53:08.235961 2909621 command_runner.go:130] > # Additional environment variables to set for all the
	I0914 22:53:08.235967 2909621 command_runner.go:130] > # containers. These are overridden if set in the
	I0914 22:53:08.235974 2909621 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0914 22:53:08.235979 2909621 command_runner.go:130] > # default_env = [
	I0914 22:53:08.235983 2909621 command_runner.go:130] > # ]
	I0914 22:53:08.235990 2909621 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0914 22:53:08.235995 2909621 command_runner.go:130] > # selinux = false
	I0914 22:53:08.236002 2909621 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0914 22:53:08.236010 2909621 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0914 22:53:08.236017 2909621 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0914 22:53:08.236022 2909621 command_runner.go:130] > # seccomp_profile = ""
	I0914 22:53:08.236029 2909621 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0914 22:53:08.236036 2909621 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0914 22:53:08.236044 2909621 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0914 22:53:08.236051 2909621 command_runner.go:130] > # which might increase security.
	I0914 22:53:08.236057 2909621 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0914 22:53:08.236064 2909621 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0914 22:53:08.236072 2909621 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0914 22:53:08.236080 2909621 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0914 22:53:08.236087 2909621 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0914 22:53:08.236094 2909621 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:53:08.236099 2909621 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0914 22:53:08.236106 2909621 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0914 22:53:08.236112 2909621 command_runner.go:130] > # the cgroup blockio controller.
	I0914 22:53:08.236117 2909621 command_runner.go:130] > # blockio_config_file = ""
	I0914 22:53:08.236125 2909621 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0914 22:53:08.236130 2909621 command_runner.go:130] > # irqbalance daemon.
	I0914 22:53:08.236137 2909621 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0914 22:53:08.236145 2909621 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0914 22:53:08.236151 2909621 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:53:08.236156 2909621 command_runner.go:130] > # rdt_config_file = ""
	I0914 22:53:08.236162 2909621 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0914 22:53:08.236168 2909621 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0914 22:53:08.236175 2909621 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0914 22:53:08.236180 2909621 command_runner.go:130] > # separate_pull_cgroup = ""
	I0914 22:53:08.236188 2909621 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0914 22:53:08.236196 2909621 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0914 22:53:08.236201 2909621 command_runner.go:130] > # will be added.
	I0914 22:53:08.236206 2909621 command_runner.go:130] > # default_capabilities = [
	I0914 22:53:08.236211 2909621 command_runner.go:130] > # 	"CHOWN",
	I0914 22:53:08.236216 2909621 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0914 22:53:08.236220 2909621 command_runner.go:130] > # 	"FSETID",
	I0914 22:53:08.236225 2909621 command_runner.go:130] > # 	"FOWNER",
	I0914 22:53:08.236230 2909621 command_runner.go:130] > # 	"SETGID",
	I0914 22:53:08.236234 2909621 command_runner.go:130] > # 	"SETUID",
	I0914 22:53:08.236239 2909621 command_runner.go:130] > # 	"SETPCAP",
	I0914 22:53:08.236244 2909621 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0914 22:53:08.236249 2909621 command_runner.go:130] > # 	"KILL",
	I0914 22:53:08.236253 2909621 command_runner.go:130] > # ]
	I0914 22:53:08.236262 2909621 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0914 22:53:08.236270 2909621 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0914 22:53:08.236276 2909621 command_runner.go:130] > # add_inheritable_capabilities = true
	I0914 22:53:08.236284 2909621 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0914 22:53:08.236291 2909621 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0914 22:53:08.236297 2909621 command_runner.go:130] > # default_sysctls = [
	I0914 22:53:08.236301 2909621 command_runner.go:130] > # ]
	I0914 22:53:08.236307 2909621 command_runner.go:130] > # List of devices on the host that a
	I0914 22:53:08.236314 2909621 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0914 22:53:08.236319 2909621 command_runner.go:130] > # allowed_devices = [
	I0914 22:53:08.236324 2909621 command_runner.go:130] > # 	"/dev/fuse",
	I0914 22:53:08.236328 2909621 command_runner.go:130] > # ]
	I0914 22:53:08.236334 2909621 command_runner.go:130] > # List of additional devices. specified as
	I0914 22:53:08.236369 2909621 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0914 22:53:08.236377 2909621 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0914 22:53:08.236384 2909621 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0914 22:53:08.236389 2909621 command_runner.go:130] > # additional_devices = [
	I0914 22:53:08.236393 2909621 command_runner.go:130] > # ]
	I0914 22:53:08.236399 2909621 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0914 22:53:08.236404 2909621 command_runner.go:130] > # cdi_spec_dirs = [
	I0914 22:53:08.236409 2909621 command_runner.go:130] > # 	"/etc/cdi",
	I0914 22:53:08.236414 2909621 command_runner.go:130] > # 	"/var/run/cdi",
	I0914 22:53:08.236418 2909621 command_runner.go:130] > # ]
	I0914 22:53:08.236425 2909621 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0914 22:53:08.236433 2909621 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0914 22:53:08.236438 2909621 command_runner.go:130] > # Defaults to false.
	I0914 22:53:08.236444 2909621 command_runner.go:130] > # device_ownership_from_security_context = false
	I0914 22:53:08.236452 2909621 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0914 22:53:08.236460 2909621 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0914 22:53:08.236464 2909621 command_runner.go:130] > # hooks_dir = [
	I0914 22:53:08.236470 2909621 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0914 22:53:08.236474 2909621 command_runner.go:130] > # ]
	I0914 22:53:08.236482 2909621 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0914 22:53:08.236489 2909621 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0914 22:53:08.236511 2909621 command_runner.go:130] > # its default mounts from the following two files:
	I0914 22:53:08.236515 2909621 command_runner.go:130] > #
	I0914 22:53:08.236523 2909621 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0914 22:53:08.236530 2909621 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0914 22:53:08.236537 2909621 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0914 22:53:08.236541 2909621 command_runner.go:130] > #
	I0914 22:53:08.236548 2909621 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0914 22:53:08.236556 2909621 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0914 22:53:08.236564 2909621 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0914 22:53:08.236570 2909621 command_runner.go:130] > #      only add mounts it finds in this file.
	I0914 22:53:08.236574 2909621 command_runner.go:130] > #
	I0914 22:53:08.236580 2909621 command_runner.go:130] > # default_mounts_file = ""
	I0914 22:53:08.236587 2909621 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0914 22:53:08.236595 2909621 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0914 22:53:08.236600 2909621 command_runner.go:130] > # pids_limit = 0
	I0914 22:53:08.236607 2909621 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0914 22:53:08.236616 2909621 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0914 22:53:08.236624 2909621 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0914 22:53:08.236633 2909621 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0914 22:53:08.236638 2909621 command_runner.go:130] > # log_size_max = -1
	I0914 22:53:08.236646 2909621 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0914 22:53:08.236651 2909621 command_runner.go:130] > # log_to_journald = false
	I0914 22:53:08.236659 2909621 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0914 22:53:08.236665 2909621 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0914 22:53:08.236672 2909621 command_runner.go:130] > # Path to directory for container attach sockets.
	I0914 22:53:08.236678 2909621 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0914 22:53:08.236685 2909621 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0914 22:53:08.236690 2909621 command_runner.go:130] > # bind_mount_prefix = ""
	I0914 22:53:08.236697 2909621 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0914 22:53:08.236702 2909621 command_runner.go:130] > # read_only = false
	I0914 22:53:08.236710 2909621 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0914 22:53:08.236717 2909621 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0914 22:53:08.236722 2909621 command_runner.go:130] > # live configuration reload.
	I0914 22:53:08.236727 2909621 command_runner.go:130] > # log_level = "info"
	I0914 22:53:08.236734 2909621 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0914 22:53:08.236741 2909621 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:53:08.236746 2909621 command_runner.go:130] > # log_filter = ""
	I0914 22:53:08.236754 2909621 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0914 22:53:08.236761 2909621 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0914 22:53:08.236767 2909621 command_runner.go:130] > # separated by comma.
	I0914 22:53:08.236772 2909621 command_runner.go:130] > # uid_mappings = ""
	I0914 22:53:08.236779 2909621 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0914 22:53:08.236786 2909621 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0914 22:53:08.236791 2909621 command_runner.go:130] > # separated by comma.
	I0914 22:53:08.236797 2909621 command_runner.go:130] > # gid_mappings = ""
	I0914 22:53:08.236804 2909621 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0914 22:53:08.236812 2909621 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0914 22:53:08.236820 2909621 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0914 22:53:08.236825 2909621 command_runner.go:130] > # minimum_mappable_uid = -1
	I0914 22:53:08.236833 2909621 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0914 22:53:08.236841 2909621 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0914 22:53:08.236848 2909621 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0914 22:53:08.236854 2909621 command_runner.go:130] > # minimum_mappable_gid = -1
	I0914 22:53:08.236862 2909621 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0914 22:53:08.236869 2909621 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0914 22:53:08.236876 2909621 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0914 22:53:08.236881 2909621 command_runner.go:130] > # ctr_stop_timeout = 30
	I0914 22:53:08.236888 2909621 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0914 22:53:08.236898 2909621 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0914 22:53:08.236904 2909621 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0914 22:53:08.236910 2909621 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0914 22:53:08.236915 2909621 command_runner.go:130] > # drop_infra_ctr = true
	I0914 22:53:08.236923 2909621 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0914 22:53:08.236930 2909621 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0914 22:53:08.236938 2909621 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0914 22:53:08.236944 2909621 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0914 22:53:08.236953 2909621 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0914 22:53:08.236959 2909621 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0914 22:53:08.236964 2909621 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0914 22:53:08.236973 2909621 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0914 22:53:08.236978 2909621 command_runner.go:130] > # pinns_path = ""
	I0914 22:53:08.236986 2909621 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0914 22:53:08.236993 2909621 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0914 22:53:08.237001 2909621 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0914 22:53:08.237007 2909621 command_runner.go:130] > # default_runtime = "runc"
	I0914 22:53:08.237014 2909621 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0914 22:53:08.237023 2909621 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0914 22:53:08.237034 2909621 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I0914 22:53:08.237040 2909621 command_runner.go:130] > # creation as a file is not desired either.
	I0914 22:53:08.237050 2909621 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0914 22:53:08.237056 2909621 command_runner.go:130] > # the hostname is being managed dynamically.
	I0914 22:53:08.237062 2909621 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0914 22:53:08.237066 2909621 command_runner.go:130] > # ]
	I0914 22:53:08.237074 2909621 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0914 22:53:08.237082 2909621 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0914 22:53:08.237090 2909621 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0914 22:53:08.237098 2909621 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0914 22:53:08.237104 2909621 command_runner.go:130] > #
	I0914 22:53:08.237110 2909621 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0914 22:53:08.237116 2909621 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0914 22:53:08.237121 2909621 command_runner.go:130] > #  runtime_type = "oci"
	I0914 22:53:08.237127 2909621 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0914 22:53:08.237132 2909621 command_runner.go:130] > #  privileged_without_host_devices = false
	I0914 22:53:08.237138 2909621 command_runner.go:130] > #  allowed_annotations = []
	I0914 22:53:08.237142 2909621 command_runner.go:130] > # Where:
	I0914 22:53:08.237148 2909621 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0914 22:53:08.237156 2909621 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0914 22:53:08.237164 2909621 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0914 22:53:08.237172 2909621 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0914 22:53:08.237176 2909621 command_runner.go:130] > #   in $PATH.
	I0914 22:53:08.237184 2909621 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0914 22:53:08.237190 2909621 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0914 22:53:08.237197 2909621 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0914 22:53:08.237202 2909621 command_runner.go:130] > #   state.
	I0914 22:53:08.237209 2909621 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0914 22:53:08.237217 2909621 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0914 22:53:08.237224 2909621 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0914 22:53:08.237231 2909621 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0914 22:53:08.237239 2909621 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0914 22:53:08.237247 2909621 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0914 22:53:08.237252 2909621 command_runner.go:130] > #   The currently recognized values are:
	I0914 22:53:08.237260 2909621 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0914 22:53:08.237269 2909621 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0914 22:53:08.237276 2909621 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0914 22:53:08.237284 2909621 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0914 22:53:08.237293 2909621 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0914 22:53:08.237300 2909621 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0914 22:53:08.237308 2909621 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0914 22:53:08.237316 2909621 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0914 22:53:08.237324 2909621 command_runner.go:130] > #   should be moved to the container's cgroup
	I0914 22:53:08.237329 2909621 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0914 22:53:08.237335 2909621 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0914 22:53:08.237340 2909621 command_runner.go:130] > runtime_type = "oci"
	I0914 22:53:08.237347 2909621 command_runner.go:130] > runtime_root = "/run/runc"
	I0914 22:53:08.237353 2909621 command_runner.go:130] > runtime_config_path = ""
	I0914 22:53:08.237358 2909621 command_runner.go:130] > monitor_path = ""
	I0914 22:53:08.237363 2909621 command_runner.go:130] > monitor_cgroup = ""
	I0914 22:53:08.237368 2909621 command_runner.go:130] > monitor_exec_cgroup = ""
	I0914 22:53:08.237388 2909621 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0914 22:53:08.237393 2909621 command_runner.go:130] > # running containers
	I0914 22:53:08.237398 2909621 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0914 22:53:08.237406 2909621 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0914 22:53:08.237414 2909621 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0914 22:53:08.237421 2909621 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0914 22:53:08.237427 2909621 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0914 22:53:08.237433 2909621 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0914 22:53:08.237438 2909621 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0914 22:53:08.237444 2909621 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0914 22:53:08.237450 2909621 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0914 22:53:08.237456 2909621 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0914 22:53:08.237464 2909621 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0914 22:53:08.237470 2909621 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0914 22:53:08.237478 2909621 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0914 22:53:08.237487 2909621 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0914 22:53:08.237496 2909621 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0914 22:53:08.237504 2909621 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0914 22:53:08.237515 2909621 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0914 22:53:08.237525 2909621 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0914 22:53:08.237532 2909621 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0914 22:53:08.237540 2909621 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0914 22:53:08.237545 2909621 command_runner.go:130] > # Example:
	I0914 22:53:08.237551 2909621 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0914 22:53:08.237557 2909621 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0914 22:53:08.237563 2909621 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0914 22:53:08.237569 2909621 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0914 22:53:08.237575 2909621 command_runner.go:130] > # cpuset = 0
	I0914 22:53:08.237579 2909621 command_runner.go:130] > # cpushares = "0-1"
	I0914 22:53:08.237584 2909621 command_runner.go:130] > # Where:
	I0914 22:53:08.237589 2909621 command_runner.go:130] > # The workload name is workload-type.
	I0914 22:53:08.237599 2909621 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0914 22:53:08.237608 2909621 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0914 22:53:08.237616 2909621 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0914 22:53:08.237625 2909621 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0914 22:53:08.237632 2909621 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0914 22:53:08.237636 2909621 command_runner.go:130] > # 
	I0914 22:53:08.237644 2909621 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0914 22:53:08.237648 2909621 command_runner.go:130] > #
	I0914 22:53:08.237655 2909621 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0914 22:53:08.237663 2909621 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0914 22:53:08.237670 2909621 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0914 22:53:08.237679 2909621 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0914 22:53:08.237687 2909621 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0914 22:53:08.237691 2909621 command_runner.go:130] > [crio.image]
	I0914 22:53:08.237699 2909621 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0914 22:53:08.237704 2909621 command_runner.go:130] > # default_transport = "docker://"
	I0914 22:53:08.237712 2909621 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0914 22:53:08.237719 2909621 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0914 22:53:08.237725 2909621 command_runner.go:130] > # global_auth_file = ""
	I0914 22:53:08.237731 2909621 command_runner.go:130] > # The image used to instantiate infra containers.
	I0914 22:53:08.237737 2909621 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:53:08.237743 2909621 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0914 22:53:08.237751 2909621 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0914 22:53:08.237758 2909621 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0914 22:53:08.237765 2909621 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:53:08.237770 2909621 command_runner.go:130] > # pause_image_auth_file = ""
	I0914 22:53:08.237777 2909621 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0914 22:53:08.237784 2909621 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0914 22:53:08.237792 2909621 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0914 22:53:08.237799 2909621 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0914 22:53:08.237804 2909621 command_runner.go:130] > # pause_command = "/pause"
	I0914 22:53:08.237812 2909621 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0914 22:53:08.237820 2909621 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0914 22:53:08.237828 2909621 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0914 22:53:08.237835 2909621 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0914 22:53:08.237842 2909621 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0914 22:53:08.237852 2909621 command_runner.go:130] > # signature_policy = ""
	I0914 22:53:08.237864 2909621 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0914 22:53:08.237872 2909621 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0914 22:53:08.237877 2909621 command_runner.go:130] > # changing them here.
	I0914 22:53:08.237882 2909621 command_runner.go:130] > # insecure_registries = [
	I0914 22:53:08.237886 2909621 command_runner.go:130] > # ]
	I0914 22:53:08.237893 2909621 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0914 22:53:08.237900 2909621 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0914 22:53:08.237905 2909621 command_runner.go:130] > # image_volumes = "mkdir"
	I0914 22:53:08.237911 2909621 command_runner.go:130] > # Temporary directory to use for storing big files
	I0914 22:53:08.237917 2909621 command_runner.go:130] > # big_files_temporary_dir = ""
	I0914 22:53:08.237924 2909621 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I0914 22:53:08.237929 2909621 command_runner.go:130] > # CNI plugins.
	I0914 22:53:08.237933 2909621 command_runner.go:130] > [crio.network]
	I0914 22:53:08.237940 2909621 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0914 22:53:08.237947 2909621 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0914 22:53:08.237952 2909621 command_runner.go:130] > # cni_default_network = ""
	I0914 22:53:08.237959 2909621 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0914 22:53:08.237965 2909621 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0914 22:53:08.237972 2909621 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0914 22:53:08.237976 2909621 command_runner.go:130] > # plugin_dirs = [
	I0914 22:53:08.237981 2909621 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0914 22:53:08.237985 2909621 command_runner.go:130] > # ]
	I0914 22:53:08.237993 2909621 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0914 22:53:08.237997 2909621 command_runner.go:130] > [crio.metrics]
	I0914 22:53:08.238003 2909621 command_runner.go:130] > # Globally enable or disable metrics support.
	I0914 22:53:08.238008 2909621 command_runner.go:130] > # enable_metrics = false
	I0914 22:53:08.238014 2909621 command_runner.go:130] > # Specify enabled metrics collectors.
	I0914 22:53:08.238019 2909621 command_runner.go:130] > # Per default all metrics are enabled.
	I0914 22:53:08.238027 2909621 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0914 22:53:08.238034 2909621 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0914 22:53:08.238041 2909621 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0914 22:53:08.238047 2909621 command_runner.go:130] > # metrics_collectors = [
	I0914 22:53:08.238051 2909621 command_runner.go:130] > # 	"operations",
	I0914 22:53:08.238057 2909621 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0914 22:53:08.238063 2909621 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0914 22:53:08.238069 2909621 command_runner.go:130] > # 	"operations_errors",
	I0914 22:53:08.238074 2909621 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0914 22:53:08.238080 2909621 command_runner.go:130] > # 	"image_pulls_by_name",
	I0914 22:53:08.238085 2909621 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0914 22:53:08.238090 2909621 command_runner.go:130] > # 	"image_pulls_failures",
	I0914 22:53:08.238095 2909621 command_runner.go:130] > # 	"image_pulls_successes",
	I0914 22:53:08.238100 2909621 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0914 22:53:08.238105 2909621 command_runner.go:130] > # 	"image_layer_reuse",
	I0914 22:53:08.238112 2909621 command_runner.go:130] > # 	"containers_oom_total",
	I0914 22:53:08.238117 2909621 command_runner.go:130] > # 	"containers_oom",
	I0914 22:53:08.238122 2909621 command_runner.go:130] > # 	"processes_defunct",
	I0914 22:53:08.238128 2909621 command_runner.go:130] > # 	"operations_total",
	I0914 22:53:08.238133 2909621 command_runner.go:130] > # 	"operations_latency_seconds",
	I0914 22:53:08.238139 2909621 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0914 22:53:08.238144 2909621 command_runner.go:130] > # 	"operations_errors_total",
	I0914 22:53:08.238150 2909621 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0914 22:53:08.238155 2909621 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0914 22:53:08.238161 2909621 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0914 22:53:08.238166 2909621 command_runner.go:130] > # 	"image_pulls_success_total",
	I0914 22:53:08.238172 2909621 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0914 22:53:08.238177 2909621 command_runner.go:130] > # 	"containers_oom_count_total",
	I0914 22:53:08.238181 2909621 command_runner.go:130] > # ]
	I0914 22:53:08.238188 2909621 command_runner.go:130] > # The port on which the metrics server will listen.
	I0914 22:53:08.238193 2909621 command_runner.go:130] > # metrics_port = 9090
	I0914 22:53:08.238199 2909621 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0914 22:53:08.238204 2909621 command_runner.go:130] > # metrics_socket = ""
	I0914 22:53:08.238210 2909621 command_runner.go:130] > # The certificate for the secure metrics server.
	I0914 22:53:08.238217 2909621 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0914 22:53:08.238225 2909621 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0914 22:53:08.238231 2909621 command_runner.go:130] > # certificate on any modification event.
	I0914 22:53:08.238236 2909621 command_runner.go:130] > # metrics_cert = ""
	I0914 22:53:08.238242 2909621 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0914 22:53:08.238248 2909621 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0914 22:53:08.238253 2909621 command_runner.go:130] > # metrics_key = ""
	I0914 22:53:08.238260 2909621 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0914 22:53:08.238265 2909621 command_runner.go:130] > [crio.tracing]
	I0914 22:53:08.238272 2909621 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0914 22:53:08.238278 2909621 command_runner.go:130] > # enable_tracing = false
	I0914 22:53:08.238284 2909621 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0914 22:53:08.238290 2909621 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0914 22:53:08.238296 2909621 command_runner.go:130] > # Number of samples to collect per million spans.
	I0914 22:53:08.238302 2909621 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0914 22:53:08.238309 2909621 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0914 22:53:08.238314 2909621 command_runner.go:130] > [crio.stats]
	I0914 22:53:08.238321 2909621 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0914 22:53:08.238328 2909621 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0914 22:53:08.238333 2909621 command_runner.go:130] > # stats_collection_period = 0
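The block above is CRI-O's commented-out defaults for its metrics, tracing, and stats sections. As a hedged sketch only (it assumes a CRI-O install that reads drop-ins from /etc/crio/crio.conf.d/, which the minikube node image may or may not be configured to do), the same keys could be overridden without editing the main crio.conf:

	# Hypothetical drop-in; keys mirror the defaults printed above.
	sudo tee /etc/crio/crio.conf.d/99-observability.conf >/dev/null <<'EOF'
	[crio.metrics]
	metrics_port = 9090

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "0.0.0.0:4317"

	[crio.stats]
	stats_collection_period = 0
	EOF
	sudo systemctl restart crio   # CRI-O re-reads its config on restart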
	I0914 22:53:08.238401 2909621 cni.go:84] Creating CNI manager for ""
	I0914 22:53:08.238409 2909621 cni.go:136] 1 nodes found, recommending kindnet
	I0914 22:53:08.238464 2909621 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:53:08.238484 2909621 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-174950 NodeName:multinode-174950 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:53:08.238669 2909621 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-174950"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 22:53:08.238735 2909621 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-174950 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-174950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 22:53:08.238801 2909621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:53:08.248330 2909621 command_runner.go:130] > kubeadm
	I0914 22:53:08.248347 2909621 command_runner.go:130] > kubectl
	I0914 22:53:08.248352 2909621 command_runner.go:130] > kubelet
	I0914 22:53:08.249537 2909621 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:53:08.249626 2909621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 22:53:08.259908 2909621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0914 22:53:08.280037 2909621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 22:53:08.300215 2909621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
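At this point the rendered kubeadm config shown above has been copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A minimal sketch of a manual sanity check, assuming kubeadm v1.28's `config validate` subcommand (this is not part of the flow logged here):

	# Run inside the minikube node; binary and config paths are taken from the log above.
	sudo /var/lib/minikube/binaries/v1.28.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new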
	I0914 22:53:08.320549 2909621 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0914 22:53:08.324719 2909621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
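The command above is an idempotent /etc/hosts update: filter out any existing control-plane.minikube.internal entry, append the current one, and copy the result back over /etc/hosts. A generalized sketch of the same pattern (the IP and hostname here are placeholders copied from this run):

	#!/bin/bash
	# Idempotently pin a hostname to an IP in /etc/hosts (placeholder values).
	ip="192.168.58.2"
	host="control-plane.minikube.internal"
	tmp="$(mktemp)"
	{ grep -v "${host}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$host"; } > "$tmp"
	sudo cp "$tmp" /etc/hosts
	rm -f "$tmp"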
	I0914 22:53:08.337555 2909621 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950 for IP: 192.168.58.2
	I0914 22:53:08.337585 2909621 certs.go:190] acquiring lock for shared ca certs: {Name:mk7b43b7d537d49c569d06654003547535d1ca4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:53:08.337715 2909621 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.key
	I0914 22:53:08.337760 2909621 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.key
	I0914 22:53:08.337812 2909621 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/client.key
	I0914 22:53:08.337828 2909621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/client.crt with IP's: []
	I0914 22:53:08.853093 2909621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/client.crt ...
	I0914 22:53:08.853126 2909621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/client.crt: {Name:mk867f0a20358aa7f61aaad862829cc1d7c06ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:53:08.853324 2909621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/client.key ...
	I0914 22:53:08.853338 2909621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/client.key: {Name:mkc064299d65a8f902b32024ade53692a898a573 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:53:08.853427 2909621 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/apiserver.key.cee25041
	I0914 22:53:08.853441 2909621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0914 22:53:10.058992 2909621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/apiserver.crt.cee25041 ...
	I0914 22:53:10.059025 2909621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/apiserver.crt.cee25041: {Name:mkce4b0448cfe2db83330a8384394df7093c70e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:53:10.059232 2909621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/apiserver.key.cee25041 ...
	I0914 22:53:10.059252 2909621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/apiserver.key.cee25041: {Name:mk8cda9f63bb1f498fadc318202f5f7788516465 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:53:10.059342 2909621 certs.go:337] copying /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/apiserver.crt
	I0914 22:53:10.059418 2909621 certs.go:341] copying /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/apiserver.key
	I0914 22:53:10.059481 2909621 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/proxy-client.key
	I0914 22:53:10.059498 2909621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/proxy-client.crt with IP's: []
	I0914 22:53:10.701898 2909621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/proxy-client.crt ...
	I0914 22:53:10.701928 2909621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/proxy-client.crt: {Name:mk61095f972d56866e98984183ddd9522ff06b35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:53:10.702106 2909621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/proxy-client.key ...
	I0914 22:53:10.702117 2909621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/proxy-client.key: {Name:mk951f97cec34ed623f6e919428734c845638d6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:53:10.702189 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 22:53:10.702204 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 22:53:10.702216 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 22:53:10.702227 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 22:53:10.702238 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 22:53:10.702252 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 22:53:10.702270 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 22:53:10.702285 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 22:53:10.702347 2909621 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/2846109.pem (1338 bytes)
	W0914 22:53:10.702383 2909621 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/2846109_empty.pem, impossibly tiny 0 bytes
	I0914 22:53:10.702392 2909621 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:53:10.702422 2909621 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:53:10.702449 2909621 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:53:10.702485 2909621 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem (1675 bytes)
	I0914 22:53:10.702529 2909621 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem (1708 bytes)
	I0914 22:53:10.702556 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/2846109.pem -> /usr/share/ca-certificates/2846109.pem
	I0914 22:53:10.702567 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem -> /usr/share/ca-certificates/28461092.pem
	I0914 22:53:10.702579 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:53:10.703158 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 22:53:10.730745 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 22:53:10.758601 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 22:53:10.786746 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 22:53:10.814415 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:53:10.841474 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 22:53:10.868920 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:53:10.897317 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:53:10.925182 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/2846109.pem --> /usr/share/ca-certificates/2846109.pem (1338 bytes)
	I0914 22:53:10.952720 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem --> /usr/share/ca-certificates/28461092.pem (1708 bytes)
	I0914 22:53:10.980248 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:53:11.007270 2909621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 22:53:11.027433 2909621 ssh_runner.go:195] Run: openssl version
	I0914 22:53:11.033897 2909621 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0914 22:53:11.034263 2909621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2846109.pem && ln -fs /usr/share/ca-certificates/2846109.pem /etc/ssl/certs/2846109.pem"
	I0914 22:53:11.045090 2909621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2846109.pem
	I0914 22:53:11.049114 2909621 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 14 22:34 /usr/share/ca-certificates/2846109.pem
	I0914 22:53:11.049320 2909621 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 22:34 /usr/share/ca-certificates/2846109.pem
	I0914 22:53:11.049380 2909621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2846109.pem
	I0914 22:53:11.057438 2909621 command_runner.go:130] > 51391683
	I0914 22:53:11.057833 2909621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2846109.pem /etc/ssl/certs/51391683.0"
	I0914 22:53:11.069070 2909621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28461092.pem && ln -fs /usr/share/ca-certificates/28461092.pem /etc/ssl/certs/28461092.pem"
	I0914 22:53:11.081219 2909621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28461092.pem
	I0914 22:53:11.085341 2909621 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 14 22:34 /usr/share/ca-certificates/28461092.pem
	I0914 22:53:11.085566 2909621 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 22:34 /usr/share/ca-certificates/28461092.pem
	I0914 22:53:11.085619 2909621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28461092.pem
	I0914 22:53:11.093539 2909621 command_runner.go:130] > 3ec20f2e
	I0914 22:53:11.093605 2909621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/28461092.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 22:53:11.104380 2909621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:53:11.115562 2909621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:53:11.119974 2909621 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 14 22:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:53:11.120215 2909621 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 22:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:53:11.120269 2909621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:53:11.128184 2909621 command_runner.go:130] > b5213941
	I0914 22:53:11.128656 2909621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
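The three-step pattern in the lines above (link the PEM into /etc/ssl/certs, compute its subject hash with openssl, then symlink <hash>.0 to it) is how OpenSSL's CApath lookup locates trust anchors. A condensed sketch of the same idea for a single certificate, using one of the paths from this run as a placeholder:

	#!/bin/bash
	# Install one CA cert where OpenSSL's CApath lookup can find it.
	pem="/usr/share/ca-certificates/minikubeCA.pem"
	hash="$(openssl x509 -hash -noout -in "$pem")"          # e.g. b5213941, as in the log
	sudo ln -fs "$pem" "/etc/ssl/certs/$(basename "$pem")"  # friendly-name link
	sudo ln -fs "/etc/ssl/certs/$(basename "$pem")" "/etc/ssl/certs/${hash}.0"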
	I0914 22:53:11.139717 2909621 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:53:11.143621 2909621 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 22:53:11.143659 2909621 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 22:53:11.143722 2909621 kubeadm.go:404] StartCluster: {Name:multinode-174950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-174950 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:53:11.143818 2909621 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 22:53:11.143880 2909621 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 22:53:11.184009 2909621 cri.go:89] found id: ""
	I0914 22:53:11.184102 2909621 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 22:53:11.194245 2909621 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0914 22:53:11.194309 2909621 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0914 22:53:11.194324 2909621 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0914 22:53:11.194392 2909621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 22:53:11.204204 2909621 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0914 22:53:11.204285 2909621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 22:53:11.214094 2909621 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0914 22:53:11.214117 2909621 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0914 22:53:11.214127 2909621 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0914 22:53:11.214137 2909621 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:53:11.214159 2909621 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 22:53:11.214192 2909621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0914 22:53:11.269956 2909621 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0914 22:53:11.269980 2909621 command_runner.go:130] > [init] Using Kubernetes version: v1.28.1
	I0914 22:53:11.270276 2909621 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 22:53:11.270293 2909621 command_runner.go:130] > [preflight] Running pre-flight checks
	I0914 22:53:11.311916 2909621 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0914 22:53:11.311940 2909621 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0914 22:53:11.311991 2909621 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1044-aws
	I0914 22:53:11.312006 2909621 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1044-aws
	I0914 22:53:11.312044 2909621 kubeadm.go:322] OS: Linux
	I0914 22:53:11.312053 2909621 command_runner.go:130] > OS: Linux
	I0914 22:53:11.312095 2909621 kubeadm.go:322] CGROUPS_CPU: enabled
	I0914 22:53:11.312103 2909621 command_runner.go:130] > CGROUPS_CPU: enabled
	I0914 22:53:11.312147 2909621 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0914 22:53:11.312156 2909621 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0914 22:53:11.312199 2909621 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0914 22:53:11.312207 2909621 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0914 22:53:11.312251 2909621 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0914 22:53:11.312260 2909621 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0914 22:53:11.312304 2909621 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0914 22:53:11.312312 2909621 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0914 22:53:11.312357 2909621 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0914 22:53:11.312366 2909621 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0914 22:53:11.312408 2909621 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0914 22:53:11.312416 2909621 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0914 22:53:11.312460 2909621 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0914 22:53:11.312468 2909621 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0914 22:53:11.312520 2909621 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0914 22:53:11.312529 2909621 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0914 22:53:11.394940 2909621 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 22:53:11.394964 2909621 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 22:53:11.395055 2909621 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 22:53:11.395064 2909621 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 22:53:11.395150 2909621 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 22:53:11.395158 2909621 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 22:53:11.635687 2909621 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 22:53:11.639788 2909621 out.go:204]   - Generating certificates and keys ...
	I0914 22:53:11.635712 2909621 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 22:53:11.640050 2909621 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 22:53:11.640080 2909621 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0914 22:53:11.640184 2909621 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 22:53:11.640209 2909621 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0914 22:53:11.848375 2909621 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 22:53:11.848445 2909621 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 22:53:12.093805 2909621 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0914 22:53:12.093842 2909621 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0914 22:53:12.268692 2909621 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0914 22:53:12.268715 2909621 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0914 22:53:12.515600 2909621 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0914 22:53:12.515624 2909621 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0914 22:53:12.849275 2909621 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0914 22:53:12.849299 2909621 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0914 22:53:12.849683 2909621 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-174950] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0914 22:53:12.849699 2909621 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-174950] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0914 22:53:13.393058 2909621 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0914 22:53:13.393084 2909621 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0914 22:53:13.393361 2909621 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-174950] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0914 22:53:13.393376 2909621 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-174950] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0914 22:53:13.611140 2909621 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 22:53:13.611170 2909621 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 22:53:13.945024 2909621 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 22:53:13.945091 2909621 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 22:53:14.121338 2909621 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0914 22:53:14.121361 2909621 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0914 22:53:14.121683 2909621 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 22:53:14.121696 2909621 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 22:53:14.928131 2909621 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 22:53:14.928154 2909621 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 22:53:15.369384 2909621 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 22:53:15.369407 2909621 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 22:53:15.877051 2909621 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 22:53:15.877073 2909621 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 22:53:16.020625 2909621 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 22:53:16.020648 2909621 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 22:53:16.021244 2909621 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 22:53:16.021258 2909621 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 22:53:16.023918 2909621 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 22:53:16.026391 2909621 out.go:204]   - Booting up control plane ...
	I0914 22:53:16.024001 2909621 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 22:53:16.026506 2909621 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 22:53:16.026515 2909621 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 22:53:16.026618 2909621 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 22:53:16.026624 2909621 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 22:53:16.026852 2909621 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 22:53:16.026863 2909621 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 22:53:16.038335 2909621 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 22:53:16.038357 2909621 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 22:53:16.040744 2909621 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 22:53:16.040764 2909621 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 22:53:16.040985 2909621 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0914 22:53:16.040997 2909621 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0914 22:53:16.151010 2909621 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 22:53:16.151034 2909621 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 22:53:24.652865 2909621 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.501918 seconds
	I0914 22:53:24.652888 2909621 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.501918 seconds
	I0914 22:53:24.652988 2909621 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 22:53:24.652994 2909621 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 22:53:24.665025 2909621 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 22:53:24.665052 2909621 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 22:53:25.189810 2909621 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 22:53:25.189836 2909621 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0914 22:53:25.190017 2909621 kubeadm.go:322] [mark-control-plane] Marking the node multinode-174950 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 22:53:25.190043 2909621 command_runner.go:130] > [mark-control-plane] Marking the node multinode-174950 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 22:53:25.700775 2909621 kubeadm.go:322] [bootstrap-token] Using token: upmxm8.o0u60z9asn4m47dt
	I0914 22:53:25.702992 2909621 out.go:204]   - Configuring RBAC rules ...
	I0914 22:53:25.700877 2909621 command_runner.go:130] > [bootstrap-token] Using token: upmxm8.o0u60z9asn4m47dt
	I0914 22:53:25.703103 2909621 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 22:53:25.703112 2909621 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 22:53:25.707502 2909621 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 22:53:25.707523 2909621 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 22:53:25.716920 2909621 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 22:53:25.716940 2909621 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 22:53:25.721831 2909621 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 22:53:25.721856 2909621 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 22:53:25.725500 2909621 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 22:53:25.725508 2909621 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 22:53:25.729632 2909621 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 22:53:25.729655 2909621 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 22:53:25.743542 2909621 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 22:53:25.743568 2909621 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 22:53:25.992123 2909621 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 22:53:25.992150 2909621 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0914 22:53:26.119074 2909621 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 22:53:26.119100 2909621 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0914 22:53:26.119106 2909621 kubeadm.go:322] 
	I0914 22:53:26.119165 2909621 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 22:53:26.119175 2909621 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0914 22:53:26.119180 2909621 kubeadm.go:322] 
	I0914 22:53:26.119251 2909621 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 22:53:26.119260 2909621 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0914 22:53:26.119264 2909621 kubeadm.go:322] 
	I0914 22:53:26.119288 2909621 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 22:53:26.119297 2909621 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0914 22:53:26.119361 2909621 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 22:53:26.119371 2909621 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 22:53:26.119418 2909621 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 22:53:26.119426 2909621 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 22:53:26.119430 2909621 kubeadm.go:322] 
	I0914 22:53:26.119481 2909621 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0914 22:53:26.119489 2909621 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0914 22:53:26.119493 2909621 kubeadm.go:322] 
	I0914 22:53:26.119538 2909621 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 22:53:26.119546 2909621 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 22:53:26.119551 2909621 kubeadm.go:322] 
	I0914 22:53:26.119600 2909621 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 22:53:26.119608 2909621 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0914 22:53:26.119678 2909621 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 22:53:26.119686 2909621 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 22:53:26.119750 2909621 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 22:53:26.119758 2909621 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 22:53:26.119762 2909621 kubeadm.go:322] 
	I0914 22:53:26.119841 2909621 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 22:53:26.119849 2909621 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0914 22:53:26.119921 2909621 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 22:53:26.119929 2909621 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0914 22:53:26.119933 2909621 kubeadm.go:322] 
	I0914 22:53:26.120011 2909621 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token upmxm8.o0u60z9asn4m47dt \
	I0914 22:53:26.120020 2909621 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token upmxm8.o0u60z9asn4m47dt \
	I0914 22:53:26.120117 2909621 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a79084f9d3253dc9cd91fd80defbeb60d0999d7c0aaf6667eb02a65d009cd9dc \
	I0914 22:53:26.120127 2909621 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a79084f9d3253dc9cd91fd80defbeb60d0999d7c0aaf6667eb02a65d009cd9dc \
	I0914 22:53:26.120147 2909621 kubeadm.go:322] 	--control-plane 
	I0914 22:53:26.120156 2909621 command_runner.go:130] > 	--control-plane 
	I0914 22:53:26.120160 2909621 kubeadm.go:322] 
	I0914 22:53:26.120240 2909621 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 22:53:26.120248 2909621 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0914 22:53:26.120252 2909621 kubeadm.go:322] 
	I0914 22:53:26.120329 2909621 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token upmxm8.o0u60z9asn4m47dt \
	I0914 22:53:26.120337 2909621 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token upmxm8.o0u60z9asn4m47dt \
	I0914 22:53:26.120432 2909621 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a79084f9d3253dc9cd91fd80defbeb60d0999d7c0aaf6667eb02a65d009cd9dc 
	I0914 22:53:26.120440 2909621 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a79084f9d3253dc9cd91fd80defbeb60d0999d7c0aaf6667eb02a65d009cd9dc 
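The join command printed above embeds a bootstrap token and the CA cert hash; per the InitConfiguration earlier in this log the token expires after 24h. A minimal sketch for minting a fresh worker join command later, run on the control plane:

	# Regenerate a worker join command with a new bootstrap token.
	sudo /var/lib/minikube/binaries/v1.28.1/kubeadm token create --print-join-command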
	I0914 22:53:26.124203 2909621 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-aws\n", err: exit status 1
	I0914 22:53:26.124225 2909621 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-aws\n", err: exit status 1
	I0914 22:53:26.124343 2909621 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 22:53:26.124358 2909621 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 22:53:26.124367 2909621 cni.go:84] Creating CNI manager for ""
	I0914 22:53:26.124378 2909621 cni.go:136] 1 nodes found, recommending kindnet
	I0914 22:53:26.127915 2909621 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0914 22:53:26.130133 2909621 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 22:53:26.140061 2909621 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0914 22:53:26.140085 2909621 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I0914 22:53:26.140093 2909621 command_runner.go:130] > Device: 3ah/58d	Inode: 2093924     Links: 1
	I0914 22:53:26.140101 2909621 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 22:53:26.140107 2909621 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I0914 22:53:26.140114 2909621 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I0914 22:53:26.140120 2909621 command_runner.go:130] > Change: 2023-09-14 22:27:05.126482900 +0000
	I0914 22:53:26.140126 2909621 command_runner.go:130] >  Birth: 2023-09-14 22:27:05.082482920 +0000
	I0914 22:53:26.141666 2909621 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0914 22:53:26.141680 2909621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0914 22:53:26.186077 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 22:53:27.007735 2909621 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0914 22:53:27.018284 2909621 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0914 22:53:27.026851 2909621 command_runner.go:130] > serviceaccount/kindnet created
	I0914 22:53:27.038173 2909621 command_runner.go:130] > daemonset.apps/kindnet created
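Once the kindnet manifest is applied, the DaemonSet still has to roll out on the node before pod networking works. A quick check, assuming kindnet lands in kube-system (as minikube's bundled manifest normally places it) and that the kubeconfig context matches the profile name:

	# Wait for the kindnet DaemonSet to become ready (namespace and context are assumptions).
	kubectl --context multinode-174950 -n kube-system rollout status daemonset/kindnet --timeout=120s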
	I0914 22:53:27.044245 2909621 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 22:53:27.044374 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:27.044462 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=multinode-174950 minikube.k8s.io/updated_at=2023_09_14T22_53_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:27.177064 2909621 command_runner.go:130] > node/multinode-174950 labeled
	I0914 22:53:27.180541 2909621 command_runner.go:130] > -16
	I0914 22:53:27.180569 2909621 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0914 22:53:27.180594 2909621 ops.go:34] apiserver oom_adj: -16
	I0914 22:53:27.180667 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:27.308635 2909621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 22:53:27.308718 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:27.398579 2909621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 22:53:27.899290 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:27.990583 2909621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 22:53:28.399430 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:28.485057 2909621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 22:53:28.899794 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:28.989511 2909621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 22:53:29.398849 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:29.496327 2909621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 22:53:29.898912 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:29.999248 2909621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 22:53:30.398816 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:30.489632 2909621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 22:53:30.899294 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:31.000197 2909621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 22:53:31.399790 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:31.486599 2909621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 22:53:31.899382 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:31.994600 2909621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 22:53:32.398828 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:32.487329 2909621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 22:53:32.899394 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:32.990899 2909621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 22:53:33.399538 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:33.499114 2909621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 22:53:33.899694 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:33.991868 2909621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 22:53:34.399534 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:34.485655 2909621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 22:53:34.898785 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:34.987279 2909621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 22:53:35.398788 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:35.486279 2909621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 22:53:35.898821 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:35.990603 2909621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 22:53:36.399161 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:36.487278 2909621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 22:53:36.899539 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:37.004259 2909621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 22:53:37.399559 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:37.510626 2909621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 22:53:37.898849 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:37.998290 2909621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 22:53:38.399605 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:38.496528 2909621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0914 22:53:38.898832 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 22:53:39.007400 2909621 command_runner.go:130] > NAME      SECRETS   AGE
	I0914 22:53:39.007525 2909621 command_runner.go:130] > default   0         1s
	I0914 22:53:39.011023 2909621 kubeadm.go:1081] duration metric: took 11.966696322s to wait for elevateKubeSystemPrivileges.
	I0914 22:53:39.011049 2909621 kubeadm.go:406] StartCluster complete in 27.867332641s
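For context: the burst of "serviceaccounts \"default\" not found" lines above is minikube polling until the controller manager creates the default ServiceAccount after kubeadm init. A minimal bash sketch of that wait, using the exact command from the log; the ~500ms retry interval is inferred from the timestamps, not taken from minikube source:

	# Poll until the "default" ServiceAccount exists in the new cluster.
	until sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # retry roughly twice a second, as the timestamps suggest
	done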
	I0914 22:53:39.011065 2909621 settings.go:142] acquiring lock: {Name:mk797c549b93011f59a1b1413899d7ef3e9584bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:53:39.011133 2909621 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 22:53:39.011826 2909621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/kubeconfig: {Name:mk7bbed64d52f47ff1629e01e738a8a5f092c9ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:53:39.012291 2909621 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 22:53:39.012616 2909621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 22:53:39.012871 2909621 config.go:182] Loaded profile config "multinode-174950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:53:39.012921 2909621 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 22:53:39.012995 2909621 addons.go:69] Setting storage-provisioner=true in profile "multinode-174950"
	I0914 22:53:39.013012 2909621 addons.go:231] Setting addon storage-provisioner=true in "multinode-174950"
	I0914 22:53:39.013063 2909621 host.go:66] Checking if "multinode-174950" exists ...
	I0914 22:53:39.013524 2909621 cli_runner.go:164] Run: docker container inspect multinode-174950 --format={{.State.Status}}
	I0914 22:53:39.013967 2909621 addons.go:69] Setting default-storageclass=true in profile "multinode-174950"
	I0914 22:53:39.013987 2909621 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-174950"
	I0914 22:53:39.014238 2909621 cli_runner.go:164] Run: docker container inspect multinode-174950 --format={{.State.Status}}
	I0914 22:53:39.013850 2909621 kapi.go:59] client config for multinode-174950: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/client.key", CAFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 22:53:39.015544 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0914 22:53:39.015561 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:39.015570 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:39.015577 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:39.015794 2909621 cert_rotation.go:137] Starting client certificate rotation controller
	I0914 22:53:39.035193 2909621 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0914 22:53:39.035215 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:39.035224 2909621 round_trippers.go:580]     Audit-Id: 1f81c684-a614-486c-8d65-47fd32eabeef
	I0914 22:53:39.035231 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:39.035237 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:39.035244 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:39.035262 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:39.035276 2909621 round_trippers.go:580]     Content-Length: 291
	I0914 22:53:39.035283 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:39 GMT
	I0914 22:53:39.035313 2909621 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"207ba6c6-19ae-4b3e-a152-834bf8ae55eb","resourceVersion":"352","creationTimestamp":"2023-09-14T22:53:25Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0914 22:53:39.035805 2909621 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"207ba6c6-19ae-4b3e-a152-834bf8ae55eb","resourceVersion":"352","creationTimestamp":"2023-09-14T22:53:25Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0914 22:53:39.035858 2909621 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0914 22:53:39.035889 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:39.035903 2909621 round_trippers.go:473]     Content-Type: application/json
	I0914 22:53:39.035910 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:39.035922 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:39.050163 2909621 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0914 22:53:39.050189 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:39.050198 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:39.050205 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:39.050211 2909621 round_trippers.go:580]     Content-Length: 291
	I0914 22:53:39.050218 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:39 GMT
	I0914 22:53:39.050224 2909621 round_trippers.go:580]     Audit-Id: 90b9d912-731a-463f-8a7b-e2611c684344
	I0914 22:53:39.050234 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:39.050241 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:39.050267 2909621 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"207ba6c6-19ae-4b3e-a152-834bf8ae55eb","resourceVersion":"353","creationTimestamp":"2023-09-14T22:53:25Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0914 22:53:39.050423 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0914 22:53:39.050437 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:39.050445 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:39.050461 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:39.079269 2909621 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 22:53:39.072364 2909621 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 22:53:39.085404 2909621 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0914 22:53:39.087154 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:39.087167 2909621 round_trippers.go:580]     Content-Length: 291
	I0914 22:53:39.087174 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:39 GMT
	I0914 22:53:39.087180 2909621 round_trippers.go:580]     Audit-Id: 733878ce-739d-477c-add6-4d980a1807e9
	I0914 22:53:39.087187 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:39.087193 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:39.087199 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:39.087207 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:39.087234 2909621 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"207ba6c6-19ae-4b3e-a152-834bf8ae55eb","resourceVersion":"353","creationTimestamp":"2023-09-14T22:53:25Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0914 22:53:39.087324 2909621 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-174950" context rescaled to 1 replicas
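The GET/PUT pair above scales the coredns Deployment from 2 replicas down to 1 through the autoscaling/v1 Scale subresource. A roughly equivalent kubectl invocation, shown only for reference (minikube talks to the API directly, and the context name here is assumed to match the profile name):

	# Reference only: same effect as the Scale subresource PUT logged above.
	kubectl --context multinode-174950 -n kube-system \
	  scale deployment coredns --replicas=1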
	I0914 22:53:39.087350 2909621 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 22:53:39.090599 2909621 out.go:177] * Verifying Kubernetes components...
	I0914 22:53:39.087564 2909621 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:53:39.087842 2909621 kapi.go:59] client config for multinode-174950: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/client.key", CAFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 22:53:39.093347 2909621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:53:39.093443 2909621 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 22:53:39.093487 2909621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-174950
	I0914 22:53:39.093704 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0914 22:53:39.093712 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:39.093721 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:39.093728 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:39.119730 2909621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36463 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/multinode-174950/id_rsa Username:docker}
	I0914 22:53:39.172467 2909621 round_trippers.go:574] Response Status: 200 OK in 78 milliseconds
	I0914 22:53:39.172506 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:39.172516 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:39.172523 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:39.172529 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:39.172535 2909621 round_trippers.go:580]     Content-Length: 109
	I0914 22:53:39.172541 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:39 GMT
	I0914 22:53:39.172548 2909621 round_trippers.go:580]     Audit-Id: 2e8d0d9a-08b9-46a3-a2d4-a88d5b603600
	I0914 22:53:39.172554 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:39.172839 2909621 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"354"},"items":[]}
	I0914 22:53:39.173078 2909621 addons.go:231] Setting addon default-storageclass=true in "multinode-174950"
	I0914 22:53:39.173115 2909621 host.go:66] Checking if "multinode-174950" exists ...
	I0914 22:53:39.173540 2909621 cli_runner.go:164] Run: docker container inspect multinode-174950 --format={{.State.Status}}
	I0914 22:53:39.196744 2909621 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 22:53:39.196767 2909621 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 22:53:39.196827 2909621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-174950
	I0914 22:53:39.229009 2909621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36463 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/multinode-174950/id_rsa Username:docker}
	I0914 22:53:39.302288 2909621 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 22:53:39.341230 2909621 command_runner.go:130] > apiVersion: v1
	I0914 22:53:39.341247 2909621 command_runner.go:130] > data:
	I0914 22:53:39.341252 2909621 command_runner.go:130] >   Corefile: |
	I0914 22:53:39.341257 2909621 command_runner.go:130] >     .:53 {
	I0914 22:53:39.341263 2909621 command_runner.go:130] >         errors
	I0914 22:53:39.341269 2909621 command_runner.go:130] >         health {
	I0914 22:53:39.341274 2909621 command_runner.go:130] >            lameduck 5s
	I0914 22:53:39.341278 2909621 command_runner.go:130] >         }
	I0914 22:53:39.341283 2909621 command_runner.go:130] >         ready
	I0914 22:53:39.341290 2909621 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0914 22:53:39.341295 2909621 command_runner.go:130] >            pods insecure
	I0914 22:53:39.341302 2909621 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0914 22:53:39.341308 2909621 command_runner.go:130] >            ttl 30
	I0914 22:53:39.341312 2909621 command_runner.go:130] >         }
	I0914 22:53:39.341317 2909621 command_runner.go:130] >         prometheus :9153
	I0914 22:53:39.341323 2909621 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0914 22:53:39.341329 2909621 command_runner.go:130] >            max_concurrent 1000
	I0914 22:53:39.341333 2909621 command_runner.go:130] >         }
	I0914 22:53:39.341338 2909621 command_runner.go:130] >         cache 30
	I0914 22:53:39.341343 2909621 command_runner.go:130] >         loop
	I0914 22:53:39.341347 2909621 command_runner.go:130] >         reload
	I0914 22:53:39.341352 2909621 command_runner.go:130] >         loadbalance
	I0914 22:53:39.341356 2909621 command_runner.go:130] >     }
	I0914 22:53:39.341361 2909621 command_runner.go:130] > kind: ConfigMap
	I0914 22:53:39.341365 2909621 command_runner.go:130] > metadata:
	I0914 22:53:39.341372 2909621 command_runner.go:130] >   creationTimestamp: "2023-09-14T22:53:25Z"
	I0914 22:53:39.341377 2909621 command_runner.go:130] >   name: coredns
	I0914 22:53:39.341441 2909621 command_runner.go:130] >   namespace: kube-system
	I0914 22:53:39.341449 2909621 command_runner.go:130] >   resourceVersion: "228"
	I0914 22:53:39.341483 2909621 command_runner.go:130] >   uid: 6a510156-fa28-42ca-94d0-5aa2e5b8ac1f
	I0914 22:53:39.341630 2909621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 22:53:39.342020 2909621 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 22:53:39.342274 2909621 kapi.go:59] client config for multinode-174950: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/client.key", CAFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 22:53:39.342530 2909621 node_ready.go:35] waiting up to 6m0s for node "multinode-174950" to be "Ready" ...
	I0914 22:53:39.342593 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:39.342599 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:39.342607 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:39.342614 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:39.384026 2909621 round_trippers.go:574] Response Status: 200 OK in 41 milliseconds
	I0914 22:53:39.384088 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:39.384112 2909621 round_trippers.go:580]     Audit-Id: 13eaafbe-75c9-4692-814c-5552d4e29ef2
	I0914 22:53:39.384137 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:39.384172 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:39.384199 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:39.384222 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:39.384246 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:39 GMT
	I0914 22:53:39.384598 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"309","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0914 22:53:39.385338 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:39.385377 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:39.385400 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:39.385425 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:39.420202 2909621 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0914 22:53:39.420265 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:39.420288 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:39.420314 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:39 GMT
	I0914 22:53:39.420354 2909621 round_trippers.go:580]     Audit-Id: 6fce2680-4aea-451f-971c-dfc43ccac38d
	I0914 22:53:39.420381 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:39.420403 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:39.420426 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:39.423780 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"309","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0914 22:53:39.492095 2909621 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 22:53:39.925254 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:39.925315 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:39.925337 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:39.925361 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:39.929853 2909621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 22:53:39.929913 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:39.929938 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:39.929959 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:39.929994 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:39 GMT
	I0914 22:53:39.930022 2909621 round_trippers.go:580]     Audit-Id: b1498089-f6b9-4e6e-a294-d4566ca61a16
	I0914 22:53:39.930045 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:39.930068 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:39.930210 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"309","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0914 22:53:40.029252 2909621 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0914 22:53:40.038297 2909621 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0914 22:53:40.048045 2909621 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0914 22:53:40.057149 2909621 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0914 22:53:40.074583 2909621 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0914 22:53:40.085903 2909621 command_runner.go:130] > pod/storage-provisioner created
	I0914 22:53:40.094173 2909621 command_runner.go:130] > configmap/coredns replaced
	I0914 22:53:40.094211 2909621 start.go:917] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0914 22:53:40.094238 2909621 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0914 22:53:40.097644 2909621 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0914 22:53:40.099398 2909621 addons.go:502] enable addons completed in 1.086469388s: enabled=[storage-provisioner default-storageclass]
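The "host record injected" line reflects the sed pipeline logged at 22:53:39.341630, which rewrites the coredns ConfigMap (adding a log directive before errors) and then replaces it. Reconstructed from that sed expression, the block inserted ahead of the forward plugin in the Corefile looks like this (sketch; the address comes from the log):

	hosts {
	   192.168.58.1 host.minikube.internal
	   fallthrough
	}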
	I0914 22:53:40.424430 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:40.424452 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:40.424462 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:40.424469 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:40.428057 2909621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:53:40.428079 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:40.428088 2909621 round_trippers.go:580]     Audit-Id: a7fad9a7-924c-4b0e-98cc-fdab1313e92c
	I0914 22:53:40.428102 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:40.428109 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:40.428115 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:40.428121 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:40.428132 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:40 GMT
	I0914 22:53:40.428295 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"309","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0914 22:53:40.924832 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:40.924899 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:40.924937 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:40.924962 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:40.927382 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:40.927486 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:40.927523 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:40.927547 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:40.927572 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:40 GMT
	I0914 22:53:40.927608 2909621 round_trippers.go:580]     Audit-Id: a036e9cf-1a6a-4c9c-b621-7d0798d1bd79
	I0914 22:53:40.927634 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:40.927657 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:40.927790 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"309","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0914 22:53:41.425189 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:41.425256 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:41.425279 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:41.425302 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:41.427534 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:41.427593 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:41.427631 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:41.427655 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:41.427675 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:41 GMT
	I0914 22:53:41.427711 2909621 round_trippers.go:580]     Audit-Id: f1531844-07f7-4641-b1b5-cd667ff34e04
	I0914 22:53:41.427733 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:41.427754 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:41.428253 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"309","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0914 22:53:41.428742 2909621 node_ready.go:58] node "multinode-174950" has status "Ready":"False"
	I0914 22:53:41.924995 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:41.925017 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:41.925028 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:41.925035 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:41.927384 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:41.927404 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:41.927413 2909621 round_trippers.go:580]     Audit-Id: 442a8272-d2a7-4559-883f-0e4501ce85fc
	I0914 22:53:41.927420 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:41.927426 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:41.927432 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:41.927441 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:41.927453 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:41 GMT
	I0914 22:53:41.927702 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"309","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0914 22:53:42.424754 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:42.424776 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:42.424786 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:42.424794 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:42.427328 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:42.427349 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:42.427357 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:42.427363 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:42 GMT
	I0914 22:53:42.427371 2909621 round_trippers.go:580]     Audit-Id: 49f816d3-17fd-4be5-b83b-2886b3b8cf53
	I0914 22:53:42.427377 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:42.427383 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:42.427390 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:42.427551 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"309","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0914 22:53:42.924432 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:42.924450 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:42.924459 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:42.924466 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:42.929892 2909621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 22:53:42.929913 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:42.929921 2909621 round_trippers.go:580]     Audit-Id: 56690ab4-762b-438d-bc28-ee2e2e0bdf65
	I0914 22:53:42.929927 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:42.929933 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:42.929939 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:42.929980 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:42.929989 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:42 GMT
	I0914 22:53:42.933615 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"388","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0914 22:53:42.934050 2909621 node_ready.go:49] node "multinode-174950" has status "Ready":"True"
	I0914 22:53:42.934063 2909621 node_ready.go:38] duration metric: took 3.591517604s waiting for node "multinode-174950" to be "Ready" ...
	I0914 22:53:42.934090 2909621 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
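After the node reports Ready, the run waits on the system-critical pods matching the labels listed above. For reference only (an approximation, not what minikube executes), the same pods can be inspected manually with set-based label selectors:

	# Reference only: list the same system-critical pods by the logged labels.
	kubectl -n kube-system get pods -l 'k8s-app in (kube-dns, kube-proxy)'
	kubectl -n kube-system get pods \
	  -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)'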
	I0914 22:53:42.934186 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0914 22:53:42.934191 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:42.934199 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:42.934206 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:42.941643 2909621 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0914 22:53:42.941708 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:42.941730 2909621 round_trippers.go:580]     Audit-Id: d8885543-9dd5-4e7f-ac71-48bee50408ee
	I0914 22:53:42.941750 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:42.941785 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:42.941812 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:42.941833 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:42.941867 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:42 GMT
	I0914 22:53:42.945243 2909621 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"398"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2xp7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"76ecbab3-e96d-4c2e-be1e-21bed9f04965","resourceVersion":"394","creationTimestamp":"2023-09-14T22:53:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"012cf8a3-f2fd-4aae-a00d-05f7d523e904","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:53:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012cf8a3-f2fd-4aae-a00d-05f7d523e904\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56434 chars]
	I0914 22:53:42.949571 2909621 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2xp7v" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:42.949712 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2xp7v
	I0914 22:53:42.949737 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:42.949761 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:42.949786 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:42.955066 2909621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 22:53:42.955128 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:42.955154 2909621 round_trippers.go:580]     Audit-Id: 0f9b5c36-9394-4675-a490-ab41b7513c43
	I0914 22:53:42.955178 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:42.955216 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:42.955241 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:42.955264 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:42.955300 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:42 GMT
	I0914 22:53:42.955834 2909621 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2xp7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"76ecbab3-e96d-4c2e-be1e-21bed9f04965","resourceVersion":"394","creationTimestamp":"2023-09-14T22:53:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"012cf8a3-f2fd-4aae-a00d-05f7d523e904","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:53:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012cf8a3-f2fd-4aae-a00d-05f7d523e904\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0914 22:53:42.956436 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:42.956470 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:42.956520 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:42.956548 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:42.959049 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:42.959103 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:42.959124 2909621 round_trippers.go:580]     Audit-Id: 6ccafff1-d9a2-4029-9ebd-a62ff2dfbb81
	I0914 22:53:42.959146 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:42.959182 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:42.959207 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:42.959228 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:42.959263 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:42 GMT
	I0914 22:53:42.959922 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"388","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0914 22:53:42.960434 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2xp7v
	I0914 22:53:42.960463 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:42.960483 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:42.960534 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:42.965212 2909621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 22:53:42.965271 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:42.965292 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:42.965325 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:42.965351 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:42.965372 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:42.965410 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:42 GMT
	I0914 22:53:42.965434 2909621 round_trippers.go:580]     Audit-Id: 45b9c014-bcf6-4d32-b5c2-55aec92b9d0b
	I0914 22:53:42.965997 2909621 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2xp7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"76ecbab3-e96d-4c2e-be1e-21bed9f04965","resourceVersion":"394","creationTimestamp":"2023-09-14T22:53:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"012cf8a3-f2fd-4aae-a00d-05f7d523e904","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:53:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012cf8a3-f2fd-4aae-a00d-05f7d523e904\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0914 22:53:42.966617 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:42.966662 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:42.966684 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:42.966707 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:42.969005 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:42.969061 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:42.969083 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:42.969104 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:42.969140 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:42.969165 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:42 GMT
	I0914 22:53:42.969186 2909621 round_trippers.go:580]     Audit-Id: 7cafac0f-818a-493a-96cd-6b4102d7a321
	I0914 22:53:42.969224 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:42.969697 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"388","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0914 22:53:43.470833 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2xp7v
	I0914 22:53:43.470854 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:43.470864 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:43.470871 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:43.473329 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:43.473388 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:43.473410 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:43.473430 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:43.473467 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:43 GMT
	I0914 22:53:43.473482 2909621 round_trippers.go:580]     Audit-Id: f64e1398-5aa8-4ccf-a89b-d3f20a421fad
	I0914 22:53:43.473489 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:43.473496 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:43.473603 2909621 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2xp7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"76ecbab3-e96d-4c2e-be1e-21bed9f04965","resourceVersion":"408","creationTimestamp":"2023-09-14T22:53:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"012cf8a3-f2fd-4aae-a00d-05f7d523e904","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:53:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012cf8a3-f2fd-4aae-a00d-05f7d523e904\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0914 22:53:43.474129 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:43.474143 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:43.474151 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:43.474159 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:43.476250 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:43.476270 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:43.476278 2909621 round_trippers.go:580]     Audit-Id: 802526ae-1eb9-4db9-9dff-8c96e4c661c5
	I0914 22:53:43.476284 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:43.476290 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:43.476296 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:43.476313 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:43.476324 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:43 GMT
	I0914 22:53:43.476447 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"388","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0914 22:53:43.476860 2909621 pod_ready.go:92] pod "coredns-5dd5756b68-2xp7v" in "kube-system" namespace has status "Ready":"True"
	I0914 22:53:43.476883 2909621 pod_ready.go:81] duration metric: took 527.254527ms waiting for pod "coredns-5dd5756b68-2xp7v" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:43.476894 2909621 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-174950" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:43.476950 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-174950
	I0914 22:53:43.476960 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:43.476967 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:43.476974 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:43.479062 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:43.479081 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:43.479089 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:43.479095 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:43.479102 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:43.479108 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:43 GMT
	I0914 22:53:43.479117 2909621 round_trippers.go:580]     Audit-Id: 1847c2d7-cf33-4e70-864d-f29ba31d4aa7
	I0914 22:53:43.479127 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:43.479401 2909621 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-174950","namespace":"kube-system","uid":"a51d6460-f0b3-4961-8e4d-323c3036cbc0","resourceVersion":"295","creationTimestamp":"2023-09-14T22:53:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"9e1ae37e211ff16d31b1032ed5657d55","kubernetes.io/config.mirror":"9e1ae37e211ff16d31b1032ed5657d55","kubernetes.io/config.seen":"2023-09-14T22:53:26.037763657Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0914 22:53:43.479831 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:43.479847 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:43.479855 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:43.479862 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:43.482073 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:43.482145 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:43.482161 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:43.482169 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:43.482175 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:43 GMT
	I0914 22:53:43.482182 2909621 round_trippers.go:580]     Audit-Id: 89913e46-3ac1-4d17-8a68-125b4881b94c
	I0914 22:53:43.482190 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:43.482197 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:43.482355 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"388","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0914 22:53:43.482791 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-174950
	I0914 22:53:43.482805 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:43.482814 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:43.482821 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:43.485061 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:43.485094 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:43.485102 2909621 round_trippers.go:580]     Audit-Id: 2dd206ca-3df5-4c6d-bba8-9cebd899ac03
	I0914 22:53:43.485110 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:43.485124 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:43.485131 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:43.485143 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:43.485149 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:43 GMT
	I0914 22:53:43.485492 2909621 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-174950","namespace":"kube-system","uid":"a51d6460-f0b3-4961-8e4d-323c3036cbc0","resourceVersion":"295","creationTimestamp":"2023-09-14T22:53:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"9e1ae37e211ff16d31b1032ed5657d55","kubernetes.io/config.mirror":"9e1ae37e211ff16d31b1032ed5657d55","kubernetes.io/config.seen":"2023-09-14T22:53:26.037763657Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0914 22:53:43.485945 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:43.485961 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:43.485970 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:43.485978 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:43.488090 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:43.488113 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:43.488121 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:43 GMT
	I0914 22:53:43.488128 2909621 round_trippers.go:580]     Audit-Id: e09b8b15-986f-4ab7-80ac-bd917a3c2cf1
	I0914 22:53:43.488134 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:43.488140 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:43.488146 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:43.488156 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:43.488283 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"388","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0914 22:53:43.988953 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-174950
	I0914 22:53:43.988974 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:43.988984 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:43.988992 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:43.991432 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:43.991455 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:43.991463 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:43.991470 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:43.991476 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:43.991482 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:43.991489 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:43 GMT
	I0914 22:53:43.991498 2909621 round_trippers.go:580]     Audit-Id: 8666688d-fd1e-4d53-b95b-7f5e4a225b39
	I0914 22:53:43.991702 2909621 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-174950","namespace":"kube-system","uid":"a51d6460-f0b3-4961-8e4d-323c3036cbc0","resourceVersion":"295","creationTimestamp":"2023-09-14T22:53:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"9e1ae37e211ff16d31b1032ed5657d55","kubernetes.io/config.mirror":"9e1ae37e211ff16d31b1032ed5657d55","kubernetes.io/config.seen":"2023-09-14T22:53:26.037763657Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0914 22:53:43.992164 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:43.992178 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:43.992186 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:43.992193 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:43.994276 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:43.994291 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:43.994298 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:43.994304 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:43.994312 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:43.994318 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:43 GMT
	I0914 22:53:43.994324 2909621 round_trippers.go:580]     Audit-Id: 913c1c70-c48f-4005-aa3a-de087fe6128e
	I0914 22:53:43.994362 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:43.994542 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"388","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0914 22:53:44.488887 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-174950
	I0914 22:53:44.488909 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:44.488920 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:44.488927 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:44.491318 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:44.491379 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:44.491401 2909621 round_trippers.go:580]     Audit-Id: 6f26dadb-be27-414f-a730-a437e5b9b84d
	I0914 22:53:44.491464 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:44.491477 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:44.491484 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:44.491500 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:44.491517 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:44 GMT
	I0914 22:53:44.491637 2909621 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-174950","namespace":"kube-system","uid":"a51d6460-f0b3-4961-8e4d-323c3036cbc0","resourceVersion":"295","creationTimestamp":"2023-09-14T22:53:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"9e1ae37e211ff16d31b1032ed5657d55","kubernetes.io/config.mirror":"9e1ae37e211ff16d31b1032ed5657d55","kubernetes.io/config.seen":"2023-09-14T22:53:26.037763657Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0914 22:53:44.492107 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:44.492124 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:44.492132 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:44.492139 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:44.494092 2909621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:53:44.494144 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:44.494157 2909621 round_trippers.go:580]     Audit-Id: f2cdfbdf-2539-4020-bf13-481628e7286f
	I0914 22:53:44.494169 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:44.494176 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:44.494182 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:44.494188 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:44.494194 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:44 GMT
	I0914 22:53:44.494378 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"388","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0914 22:53:44.989002 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-174950
	I0914 22:53:44.989024 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:44.989034 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:44.989042 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:44.991591 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:44.991653 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:44.991724 2909621 round_trippers.go:580]     Audit-Id: 02f9fb13-1951-47b1-883f-8e3e0444476a
	I0914 22:53:44.991750 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:44.991772 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:44.991797 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:44.991829 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:44.991882 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:44 GMT
	I0914 22:53:44.991985 2909621 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-174950","namespace":"kube-system","uid":"a51d6460-f0b3-4961-8e4d-323c3036cbc0","resourceVersion":"295","creationTimestamp":"2023-09-14T22:53:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"9e1ae37e211ff16d31b1032ed5657d55","kubernetes.io/config.mirror":"9e1ae37e211ff16d31b1032ed5657d55","kubernetes.io/config.seen":"2023-09-14T22:53:26.037763657Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0914 22:53:44.992454 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:44.992469 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:44.992477 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:44.992484 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:44.994693 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:44.994709 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:44.994717 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:44.994723 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:44.994730 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:44.994736 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:44 GMT
	I0914 22:53:44.994743 2909621 round_trippers.go:580]     Audit-Id: bb38bb67-3681-4704-8a3b-a13a4bfd6ac5
	I0914 22:53:44.994755 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:44.994941 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"388","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0914 22:53:45.489574 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-174950
	I0914 22:53:45.489599 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:45.489609 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:45.489616 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:45.492201 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:45.492257 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:45.492279 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:45.492301 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:45.492339 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:45 GMT
	I0914 22:53:45.492363 2909621 round_trippers.go:580]     Audit-Id: 8ff4c76b-bb2c-4278-9003-03699480d34a
	I0914 22:53:45.492384 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:45.492405 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:45.492543 2909621 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-174950","namespace":"kube-system","uid":"a51d6460-f0b3-4961-8e4d-323c3036cbc0","resourceVersion":"295","creationTimestamp":"2023-09-14T22:53:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"9e1ae37e211ff16d31b1032ed5657d55","kubernetes.io/config.mirror":"9e1ae37e211ff16d31b1032ed5657d55","kubernetes.io/config.seen":"2023-09-14T22:53:26.037763657Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0914 22:53:45.493069 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:45.493086 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:45.493095 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:45.493102 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:45.495279 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:45.495296 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:45.495304 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:45 GMT
	I0914 22:53:45.495310 2909621 round_trippers.go:580]     Audit-Id: 04137655-8318-47d8-929e-797e7acf9024
	I0914 22:53:45.495316 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:45.495322 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:45.495329 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:45.495335 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:45.495460 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"388","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0914 22:53:45.495826 2909621 pod_ready.go:102] pod "etcd-multinode-174950" in "kube-system" namespace has status "Ready":"False"
	I0914 22:53:45.989219 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-174950
	I0914 22:53:45.989242 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:45.989253 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:45.989260 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:45.991646 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:45.991702 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:45.991724 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:45.991744 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:45.991781 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:45 GMT
	I0914 22:53:45.991809 2909621 round_trippers.go:580]     Audit-Id: 311da24e-72ef-495b-bace-3995d8ad9221
	I0914 22:53:45.991830 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:45.991845 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:45.991962 2909621 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-174950","namespace":"kube-system","uid":"a51d6460-f0b3-4961-8e4d-323c3036cbc0","resourceVersion":"295","creationTimestamp":"2023-09-14T22:53:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"9e1ae37e211ff16d31b1032ed5657d55","kubernetes.io/config.mirror":"9e1ae37e211ff16d31b1032ed5657d55","kubernetes.io/config.seen":"2023-09-14T22:53:26.037763657Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0914 22:53:45.992418 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:45.992434 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:45.992442 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:45.992449 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:45.994440 2909621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:53:45.994465 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:45.994474 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:45.994480 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:45.994486 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:45.994493 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:45.994507 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:45 GMT
	I0914 22:53:45.994517 2909621 round_trippers.go:580]     Audit-Id: e0382bdf-102c-467d-8be9-5deb436ba421
	I0914 22:53:45.994626 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"388","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0914 22:53:46.489683 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-174950
	I0914 22:53:46.489706 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:46.489716 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:46.489723 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:46.492135 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:46.492168 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:46.492190 2909621 round_trippers.go:580]     Audit-Id: e3031b9f-b617-4a67-b08f-bb732a6bce31
	I0914 22:53:46.492204 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:46.492211 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:46.492217 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:46.492227 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:46.492234 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:46 GMT
	I0914 22:53:46.492435 2909621 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-174950","namespace":"kube-system","uid":"a51d6460-f0b3-4961-8e4d-323c3036cbc0","resourceVersion":"416","creationTimestamp":"2023-09-14T22:53:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"9e1ae37e211ff16d31b1032ed5657d55","kubernetes.io/config.mirror":"9e1ae37e211ff16d31b1032ed5657d55","kubernetes.io/config.seen":"2023-09-14T22:53:26.037763657Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0914 22:53:46.492899 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:46.492915 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:46.492923 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:46.492930 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:46.495056 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:46.495078 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:46.495086 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:46 GMT
	I0914 22:53:46.495092 2909621 round_trippers.go:580]     Audit-Id: 247f531b-4dd9-4970-8f01-3659fc4213f9
	I0914 22:53:46.495098 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:46.495107 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:46.495119 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:46.495126 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:46.495251 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"388","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0914 22:53:46.495623 2909621 pod_ready.go:92] pod "etcd-multinode-174950" in "kube-system" namespace has status "Ready":"True"
	I0914 22:53:46.495645 2909621 pod_ready.go:81] duration metric: took 3.018740093s waiting for pod "etcd-multinode-174950" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:46.495659 2909621 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-174950" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:46.495715 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-174950
	I0914 22:53:46.495724 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:46.495732 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:46.495739 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:46.497855 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:46.497905 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:46.497920 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:46.497927 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:46.497934 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:46.497940 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:46 GMT
	I0914 22:53:46.497950 2909621 round_trippers.go:580]     Audit-Id: b782e4fe-3bee-42a3-b442-ed1d012b3d9f
	I0914 22:53:46.497957 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:46.498225 2909621 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-174950","namespace":"kube-system","uid":"ac1ba3ae-0fb3-4999-b147-5ff333a2f947","resourceVersion":"417","creationTimestamp":"2023-09-14T22:53:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b753d9f03819cd7363b6eb842fa0c58c","kubernetes.io/config.mirror":"b753d9f03819cd7363b6eb842fa0c58c","kubernetes.io/config.seen":"2023-09-14T22:53:26.037768859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0914 22:53:46.498739 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:46.498753 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:46.498762 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:46.498774 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:46.500774 2909621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:53:46.500794 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:46.500802 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:46.500824 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:46.500836 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:46.500842 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:46 GMT
	I0914 22:53:46.500854 2909621 round_trippers.go:580]     Audit-Id: 03c5e544-fc26-47c1-a150-a9c5639b2793
	I0914 22:53:46.500860 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:46.500946 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"388","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0914 22:53:46.501300 2909621 pod_ready.go:92] pod "kube-apiserver-multinode-174950" in "kube-system" namespace has status "Ready":"True"
	I0914 22:53:46.501317 2909621 pod_ready.go:81] duration metric: took 5.647578ms waiting for pod "kube-apiserver-multinode-174950" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:46.501327 2909621 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-174950" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:46.501396 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-174950
	I0914 22:53:46.501407 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:46.501415 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:46.501422 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:46.503536 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:46.503557 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:46.503565 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:46.503572 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:46 GMT
	I0914 22:53:46.503578 2909621 round_trippers.go:580]     Audit-Id: 48ebd2db-36d0-47d5-a030-cd66588ee941
	I0914 22:53:46.503585 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:46.503593 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:46.503605 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:46.503745 2909621 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-174950","namespace":"kube-system","uid":"50b26397-695e-4c44-a4dd-a7bc43801d89","resourceVersion":"418","creationTimestamp":"2023-09-14T22:53:25Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5438d3937ab25683562f3af80faa8102","kubernetes.io/config.mirror":"5438d3937ab25683562f3af80faa8102","kubernetes.io/config.seen":"2023-09-14T22:53:17.725687635Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:53:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0914 22:53:46.525429 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:46.525463 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:46.525473 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:46.525480 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:46.527910 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:46.527935 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:46.527944 2909621 round_trippers.go:580]     Audit-Id: e095e720-dd87-4183-8020-0c53feebe261
	I0914 22:53:46.527950 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:46.527956 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:46.527962 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:46.527969 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:46.527979 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:46 GMT
	I0914 22:53:46.528081 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"388","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0914 22:53:46.528462 2909621 pod_ready.go:92] pod "kube-controller-manager-multinode-174950" in "kube-system" namespace has status "Ready":"True"
	I0914 22:53:46.528478 2909621 pod_ready.go:81] duration metric: took 27.144773ms waiting for pod "kube-controller-manager-multinode-174950" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:46.528489 2909621 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hfqpz" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:46.724922 2909621 request.go:629] Waited for 196.347876ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hfqpz
	I0914 22:53:46.725001 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hfqpz
	I0914 22:53:46.725007 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:46.725016 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:46.725024 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:46.727595 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:46.727625 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:46.727633 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:46.727641 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:46.727647 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:46.727653 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:46.727660 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:46 GMT
	I0914 22:53:46.727669 2909621 round_trippers.go:580]     Audit-Id: 00942858-ff39-421d-be89-92efa0331d62
	I0914 22:53:46.727780 2909621 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hfqpz","generateName":"kube-proxy-","namespace":"kube-system","uid":"44f7ab98-fab3-4a63-ad84-ccb6e71f9c3b","resourceVersion":"379","creationTimestamp":"2023-09-14T22:53:38Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e8a2f152-91bb-4cf3-bcec-3cf0c6c4708c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:53:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8a2f152-91bb-4cf3-bcec-3cf0c6c4708c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0914 22:53:46.924527 2909621 request.go:629] Waited for 196.252377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:46.924603 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:46.924613 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:46.924629 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:46.924641 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:46.927020 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:46.927043 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:46.927054 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:46.927062 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:46.927068 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:46 GMT
	I0914 22:53:46.927074 2909621 round_trippers.go:580]     Audit-Id: 86579e32-3b9d-4d84-86d3-c741ce3c1e5c
	I0914 22:53:46.927081 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:46.927087 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:46.927223 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"388","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0914 22:53:46.927628 2909621 pod_ready.go:92] pod "kube-proxy-hfqpz" in "kube-system" namespace has status "Ready":"True"
	I0914 22:53:46.927644 2909621 pod_ready.go:81] duration metric: took 399.133959ms waiting for pod "kube-proxy-hfqpz" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:46.927656 2909621 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-174950" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:47.125047 2909621 request.go:629] Waited for 197.321094ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-174950
	I0914 22:53:47.125136 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-174950
	I0914 22:53:47.125147 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:47.125156 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:47.125164 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:47.127520 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:47.127538 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:47.127546 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:47 GMT
	I0914 22:53:47.127552 2909621 round_trippers.go:580]     Audit-Id: b44a15e4-6a03-4ad8-b3f1-1b8f23cdc5ec
	I0914 22:53:47.127558 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:47.127564 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:47.127570 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:47.127577 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:47.127719 2909621 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-174950","namespace":"kube-system","uid":"48c4d4fe-c814-4ab5-b17b-569f9c6bad4e","resourceVersion":"415","creationTimestamp":"2023-09-14T22:53:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"51fd9c2dfaf5c9ce7ec648e63e4635cd","kubernetes.io/config.mirror":"51fd9c2dfaf5c9ce7ec648e63e4635cd","kubernetes.io/config.seen":"2023-09-14T22:53:26.037771296Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0914 22:53:47.325447 2909621 request.go:629] Waited for 197.28422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:47.325505 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:53:47.325510 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:47.325527 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:47.325537 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:47.327920 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:47.327945 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:47.327963 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:47.327970 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:47.327976 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:47.327984 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:47 GMT
	I0914 22:53:47.327990 2909621 round_trippers.go:580]     Audit-Id: ec5f1c88-a1f4-4118-aeae-a38cbbab8242
	I0914 22:53:47.328001 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:47.328103 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"388","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0914 22:53:47.328486 2909621 pod_ready.go:92] pod "kube-scheduler-multinode-174950" in "kube-system" namespace has status "Ready":"True"
	I0914 22:53:47.328526 2909621 pod_ready.go:81] duration metric: took 400.85903ms waiting for pod "kube-scheduler-multinode-174950" in "kube-system" namespace to be "Ready" ...
	I0914 22:53:47.328544 2909621 pod_ready.go:38] duration metric: took 4.394442482s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
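
The pod_ready loop above polls each system-critical pod (kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) until its Ready condition reports True. A rough client-go sketch of that check, not minikube's implementation; the kubeconfig path is an assumption and the pod name is taken from this run's log:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path; substitute the profile's real one.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Pod name taken from this run's log.
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
            "kube-scheduler-multinode-174950", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        ready := false
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                ready = true
                break
            }
        }
        fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
    }
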
	I0914 22:53:47.328562 2909621 api_server.go:52] waiting for apiserver process to appear ...
	I0914 22:53:47.328624 2909621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:53:47.339919 2909621 command_runner.go:130] > 1274
	I0914 22:53:47.343376 2909621 api_server.go:72] duration metric: took 8.255997667s to wait for apiserver process to appear ...
	I0914 22:53:47.343394 2909621 api_server.go:88] waiting for apiserver healthz status ...
	I0914 22:53:47.343410 2909621 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0914 22:53:47.353814 2909621 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0914 22:53:47.353881 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0914 22:53:47.353890 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:47.353900 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:47.353910 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:47.354974 2909621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:53:47.354992 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:47.355000 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:47.355006 2909621 round_trippers.go:580]     Content-Length: 263
	I0914 22:53:47.355013 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:47 GMT
	I0914 22:53:47.355019 2909621 round_trippers.go:580]     Audit-Id: 535e36fe-a699-4c03-9a3a-6ed80c5c78cf
	I0914 22:53:47.355026 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:47.355036 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:47.355042 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:47.355067 2909621 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I0914 22:53:47.355151 2909621 api_server.go:141] control plane version: v1.28.1
	I0914 22:53:47.355162 2909621 api_server.go:131] duration metric: took 11.762602ms to wait for apiserver health ...
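
api_server.go above first waits for the kube-apiserver process (pgrep), then probes /healthz and finally reads /version to confirm the control plane is at v1.28.1. A minimal standard-library sketch of those two HTTP probes; TLS verification is skipped here only to keep the example short, while the real client authenticates against the cluster CA:

    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Skipping verification is only for brevity in this sketch.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}

        resp, err := client.Get("https://192.168.58.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        body, _ := io.ReadAll(resp.Body)
        resp.Body.Close()
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)

        resp, err = client.Get("https://192.168.58.2:8443/version")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        var v struct {
            GitVersion string `json:"gitVersion"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion)
    }
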
	I0914 22:53:47.355169 2909621 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 22:53:47.524473 2909621 request.go:629] Waited for 169.236736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0914 22:53:47.524557 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0914 22:53:47.524568 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:47.524577 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:47.524586 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:47.528186 2909621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:53:47.528214 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:47.528223 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:47.528230 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:47.528236 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:47.528242 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:47.528249 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:47 GMT
	I0914 22:53:47.528259 2909621 round_trippers.go:580]     Audit-Id: 7e83b8ee-1754-4f8e-8f23-7dd09cd4ba6c
	I0914 22:53:47.529418 2909621 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2xp7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"76ecbab3-e96d-4c2e-be1e-21bed9f04965","resourceVersion":"408","creationTimestamp":"2023-09-14T22:53:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"012cf8a3-f2fd-4aae-a00d-05f7d523e904","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:53:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012cf8a3-f2fd-4aae-a00d-05f7d523e904\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0914 22:53:47.535266 2909621 system_pods.go:59] 8 kube-system pods found
	I0914 22:53:47.535336 2909621 system_pods.go:61] "coredns-5dd5756b68-2xp7v" [76ecbab3-e96d-4c2e-be1e-21bed9f04965] Running
	I0914 22:53:47.535358 2909621 system_pods.go:61] "etcd-multinode-174950" [a51d6460-f0b3-4961-8e4d-323c3036cbc0] Running
	I0914 22:53:47.535387 2909621 system_pods.go:61] "kindnet-x8mln" [b0b0e2b5-0d63-45d9-95e4-6a75fc24e367] Running
	I0914 22:53:47.535426 2909621 system_pods.go:61] "kube-apiserver-multinode-174950" [ac1ba3ae-0fb3-4999-b147-5ff333a2f947] Running
	I0914 22:53:47.535447 2909621 system_pods.go:61] "kube-controller-manager-multinode-174950" [50b26397-695e-4c44-a4dd-a7bc43801d89] Running
	I0914 22:53:47.535483 2909621 system_pods.go:61] "kube-proxy-hfqpz" [44f7ab98-fab3-4a63-ad84-ccb6e71f9c3b] Running
	I0914 22:53:47.535508 2909621 system_pods.go:61] "kube-scheduler-multinode-174950" [48c4d4fe-c814-4ab5-b17b-569f9c6bad4e] Running
	I0914 22:53:47.535529 2909621 system_pods.go:61] "storage-provisioner" [6fd7dc96-c3be-4061-9503-3553207816e2] Running
	I0914 22:53:47.535553 2909621 system_pods.go:74] duration metric: took 180.377818ms to wait for pod list to return data ...
	I0914 22:53:47.535584 2909621 default_sa.go:34] waiting for default service account to be created ...
	I0914 22:53:47.724543 2909621 request.go:629] Waited for 188.839007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0914 22:53:47.724611 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0914 22:53:47.724620 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:47.724629 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:47.724641 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:47.727212 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:47.727231 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:47.727240 2909621 round_trippers.go:580]     Content-Length: 261
	I0914 22:53:47.727247 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:47 GMT
	I0914 22:53:47.727253 2909621 round_trippers.go:580]     Audit-Id: d853745f-1939-44b5-a69b-e140279c5cac
	I0914 22:53:47.727259 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:47.727295 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:47.727308 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:47.727315 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:47.727338 2909621 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"6504ca05-41b9-44da-9be2-b4a56c706da6","resourceVersion":"317","creationTimestamp":"2023-09-14T22:53:38Z"}}]}
	I0914 22:53:47.727545 2909621 default_sa.go:45] found service account: "default"
	I0914 22:53:47.727564 2909621 default_sa.go:55] duration metric: took 191.95871ms for default service account to be created ...
	I0914 22:53:47.727573 2909621 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 22:53:47.924908 2909621 request.go:629] Waited for 197.268622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0914 22:53:47.924966 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0914 22:53:47.924972 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:47.924986 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:47.924997 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:47.928376 2909621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:53:47.928401 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:47.928409 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:47.928416 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:47.928422 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:47 GMT
	I0914 22:53:47.928428 2909621 round_trippers.go:580]     Audit-Id: 4c2be2cb-03b0-42fa-a80c-ba0b64b8c532
	I0914 22:53:47.928434 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:47.928441 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:47.929088 2909621 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2xp7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"76ecbab3-e96d-4c2e-be1e-21bed9f04965","resourceVersion":"408","creationTimestamp":"2023-09-14T22:53:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"012cf8a3-f2fd-4aae-a00d-05f7d523e904","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:53:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012cf8a3-f2fd-4aae-a00d-05f7d523e904\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0914 22:53:47.931422 2909621 system_pods.go:86] 8 kube-system pods found
	I0914 22:53:47.931444 2909621 system_pods.go:89] "coredns-5dd5756b68-2xp7v" [76ecbab3-e96d-4c2e-be1e-21bed9f04965] Running
	I0914 22:53:47.931452 2909621 system_pods.go:89] "etcd-multinode-174950" [a51d6460-f0b3-4961-8e4d-323c3036cbc0] Running
	I0914 22:53:47.931457 2909621 system_pods.go:89] "kindnet-x8mln" [b0b0e2b5-0d63-45d9-95e4-6a75fc24e367] Running
	I0914 22:53:47.931469 2909621 system_pods.go:89] "kube-apiserver-multinode-174950" [ac1ba3ae-0fb3-4999-b147-5ff333a2f947] Running
	I0914 22:53:47.931479 2909621 system_pods.go:89] "kube-controller-manager-multinode-174950" [50b26397-695e-4c44-a4dd-a7bc43801d89] Running
	I0914 22:53:47.931485 2909621 system_pods.go:89] "kube-proxy-hfqpz" [44f7ab98-fab3-4a63-ad84-ccb6e71f9c3b] Running
	I0914 22:53:47.931490 2909621 system_pods.go:89] "kube-scheduler-multinode-174950" [48c4d4fe-c814-4ab5-b17b-569f9c6bad4e] Running
	I0914 22:53:47.931497 2909621 system_pods.go:89] "storage-provisioner" [6fd7dc96-c3be-4061-9503-3553207816e2] Running
	I0914 22:53:47.931504 2909621 system_pods.go:126] duration metric: took 203.923059ms to wait for k8s-apps to be running ...
	I0914 22:53:47.931513 2909621 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:53:47.931571 2909621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:53:47.944689 2909621 system_svc.go:56] duration metric: took 13.164506ms WaitForService to wait for kubelet.
	I0914 22:53:47.944714 2909621 kubeadm.go:581] duration metric: took 8.857342293s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:53:47.944735 2909621 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:53:48.125105 2909621 request.go:629] Waited for 180.296332ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0914 22:53:48.125162 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0914 22:53:48.125168 2909621 round_trippers.go:469] Request Headers:
	I0914 22:53:48.125177 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:53:48.125212 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:53:48.127566 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:53:48.127588 2909621 round_trippers.go:577] Response Headers:
	I0914 22:53:48.127596 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:53:48.127604 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:53:48.127623 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:53:48.127632 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:53:48.127639 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:53:48 GMT
	I0914 22:53:48.127651 2909621 round_trippers.go:580]     Audit-Id: 09521c18-a04f-4b45-95b9-1a5b81c4ac43
	I0914 22:53:48.127824 2909621 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"388","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I0914 22:53:48.128265 2909621 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 22:53:48.128289 2909621 node_conditions.go:123] node cpu capacity is 2
	I0914 22:53:48.128301 2909621 node_conditions.go:105] duration metric: took 183.560766ms to run NodePressure ...
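
The NodePressure check reads each node's reported capacity (here 203034800Ki of ephemeral storage and 2 CPUs). A short client-go sketch of pulling the same numbers out of Node.Status.Capacity; the kubeconfig path is again an assumption:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path for illustration only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }
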
	I0914 22:53:48.128312 2909621 start.go:228] waiting for startup goroutines ...
	I0914 22:53:48.128319 2909621 start.go:233] waiting for cluster config update ...
	I0914 22:53:48.128332 2909621 start.go:242] writing updated cluster config ...
	I0914 22:53:48.131221 2909621 out.go:177] 
	I0914 22:53:48.133298 2909621 config.go:182] Loaded profile config "multinode-174950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:53:48.133385 2909621 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/config.json ...
	I0914 22:53:48.135913 2909621 out.go:177] * Starting worker node multinode-174950-m02 in cluster multinode-174950
	I0914 22:53:48.137728 2909621 cache.go:122] Beginning downloading kic base image for docker with crio
	I0914 22:53:48.139561 2909621 out.go:177] * Pulling base image ...
	I0914 22:53:48.142276 2909621 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:53:48.142303 2909621 cache.go:57] Caching tarball of preloaded images
	I0914 22:53:48.142373 2909621 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 in local docker daemon
	I0914 22:53:48.142470 2909621 preload.go:174] Found /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0914 22:53:48.142507 2909621 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0914 22:53:48.142634 2909621 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/config.json ...
	I0914 22:53:48.159596 2909621 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 in local docker daemon, skipping pull
	I0914 22:53:48.159625 2909621 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 exists in daemon, skipping load
	I0914 22:53:48.159647 2909621 cache.go:195] Successfully downloaded all kic artifacts
	I0914 22:53:48.159676 2909621 start.go:365] acquiring machines lock for multinode-174950-m02: {Name:mk206e063fd415997f8f55b464c233eb39853136 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 22:53:48.159805 2909621 start.go:369] acquired machines lock for "multinode-174950-m02" in 106.372µs
	I0914 22:53:48.159836 2909621 start.go:93] Provisioning new machine with config: &{Name:multinode-174950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-174950 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0914 22:53:48.159921 2909621 start.go:125] createHost starting for "m02" (driver="docker")
	I0914 22:53:48.162711 2909621 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0914 22:53:48.162817 2909621 start.go:159] libmachine.API.Create for "multinode-174950" (driver="docker")
	I0914 22:53:48.162842 2909621 client.go:168] LocalClient.Create starting
	I0914 22:53:48.162908 2909621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem
	I0914 22:53:48.162947 2909621 main.go:141] libmachine: Decoding PEM data...
	I0914 22:53:48.162967 2909621 main.go:141] libmachine: Parsing certificate...
	I0914 22:53:48.163028 2909621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem
	I0914 22:53:48.163051 2909621 main.go:141] libmachine: Decoding PEM data...
	I0914 22:53:48.163065 2909621 main.go:141] libmachine: Parsing certificate...
	I0914 22:53:48.163324 2909621 cli_runner.go:164] Run: docker network inspect multinode-174950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 22:53:48.181320 2909621 network_create.go:76] Found existing network {name:multinode-174950 subnet:0x40015a1bc0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0914 22:53:48.181365 2909621 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-174950-m02" container
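
kic.go picks the next free address inside the existing multinode-174950 network: the /24 keeps .1 for the gateway and .2 for the control plane, so the new worker gets 192.168.58.3. A small illustrative sketch of that allocation with net/netip (not the actual kic.go logic):

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextIP returns the first address in the prefix (after the network address)
    // that is not already in use. Illustrative only.
    func nextIP(prefix netip.Prefix, inUse map[netip.Addr]bool) (netip.Addr, bool) {
        for a := prefix.Addr().Next(); prefix.Contains(a); a = a.Next() {
            if !inUse[a] {
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        prefix := netip.MustParsePrefix("192.168.58.0/24")
        used := map[netip.Addr]bool{
            netip.MustParseAddr("192.168.58.1"): true, // gateway
            netip.MustParseAddr("192.168.58.2"): true, // control plane
        }
        ip, ok := nextIP(prefix, used)
        fmt.Println(ip, ok) // 192.168.58.3 true
    }
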
	I0914 22:53:48.181440 2909621 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0914 22:53:48.198401 2909621 cli_runner.go:164] Run: docker volume create multinode-174950-m02 --label name.minikube.sigs.k8s.io=multinode-174950-m02 --label created_by.minikube.sigs.k8s.io=true
	I0914 22:53:48.216892 2909621 oci.go:103] Successfully created a docker volume multinode-174950-m02
	I0914 22:53:48.216979 2909621 cli_runner.go:164] Run: docker run --rm --name multinode-174950-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-174950-m02 --entrypoint /usr/bin/test -v multinode-174950-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 -d /var/lib
	I0914 22:53:48.895726 2909621 oci.go:107] Successfully prepared a docker volume multinode-174950-m02
	I0914 22:53:48.895764 2909621 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:53:48.895795 2909621 kic.go:190] Starting extracting preloaded images to volume ...
	I0914 22:53:48.895880 2909621 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-174950-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 -I lz4 -xf /preloaded.tar -C /extractDir
	I0914 22:53:53.102161 2909621 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-174950-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 -I lz4 -xf /preloaded.tar -C /extractDir: (4.206240826s)
	I0914 22:53:53.102193 2909621 kic.go:199] duration metric: took 4.206395 seconds to extract preloaded images to volume
	W0914 22:53:53.102336 2909621 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0914 22:53:53.102451 2909621 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0914 22:53:53.171330 2909621 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-174950-m02 --name multinode-174950-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-174950-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-174950-m02 --network multinode-174950 --ip 192.168.58.3 --volume multinode-174950-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503
	I0914 22:53:53.533952 2909621 cli_runner.go:164] Run: docker container inspect multinode-174950-m02 --format={{.State.Running}}
	I0914 22:53:53.556808 2909621 cli_runner.go:164] Run: docker container inspect multinode-174950-m02 --format={{.State.Status}}
	I0914 22:53:53.584758 2909621 cli_runner.go:164] Run: docker exec multinode-174950-m02 stat /var/lib/dpkg/alternatives/iptables
	I0914 22:53:53.669332 2909621 oci.go:144] the created container "multinode-174950-m02" has a running status.
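
Each docker step above (network inspect, volume create, docker run, container inspect) is the docker CLI being shelled out to and its output parsed. A minimal sketch of that pattern with os/exec, shown with the same status probe used right after the container starts; it is illustrative, not minikube's cli_runner:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // runDocker shells out to the docker CLI and returns trimmed combined output.
    func runDocker(args ...string) (string, error) {
        out, err := exec.Command("docker", args...).CombinedOutput()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        // The status probe used right after the container is created.
        status, err := runDocker("container", "inspect", "multinode-174950-m02",
            "--format", "{{.State.Status}}")
        if err != nil {
            fmt.Println("docker failed:", err, status)
            return
        }
        fmt.Println("container status:", status)
    }
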
	I0914 22:53:53.669363 2909621 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/multinode-174950-m02/id_rsa...
	I0914 22:53:54.167634 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/multinode-174950-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0914 22:53:54.167728 2909621 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/multinode-174950-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0914 22:53:54.214860 2909621 cli_runner.go:164] Run: docker container inspect multinode-174950-m02 --format={{.State.Status}}
	I0914 22:53:54.249822 2909621 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0914 22:53:54.249840 2909621 kic_runner.go:114] Args: [docker exec --privileged multinode-174950-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0914 22:53:54.337867 2909621 cli_runner.go:164] Run: docker container inspect multinode-174950-m02 --format={{.State.Status}}
	I0914 22:53:54.371490 2909621 machine.go:88] provisioning docker machine ...
	I0914 22:53:54.371519 2909621 ubuntu.go:169] provisioning hostname "multinode-174950-m02"
	I0914 22:53:54.371587 2909621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-174950-m02
	I0914 22:53:54.399718 2909621 main.go:141] libmachine: Using SSH client type: native
	I0914 22:53:54.400126 2909621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36468 <nil> <nil>}
	I0914 22:53:54.400138 2909621 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-174950-m02 && echo "multinode-174950-m02" | sudo tee /etc/hostname
	I0914 22:53:54.606189 2909621 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-174950-m02
	
	I0914 22:53:54.606357 2909621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-174950-m02
	I0914 22:53:54.644326 2909621 main.go:141] libmachine: Using SSH client type: native
	I0914 22:53:54.644782 2909621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36468 <nil> <nil>}
	I0914 22:53:54.644810 2909621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-174950-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-174950-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-174950-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 22:53:54.798151 2909621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 22:53:54.798180 2909621 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17243-2840729/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-2840729/.minikube}
	I0914 22:53:54.798197 2909621 ubuntu.go:177] setting up certificates
	I0914 22:53:54.798205 2909621 provision.go:83] configureAuth start
	I0914 22:53:54.798269 2909621 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-174950-m02
	I0914 22:53:54.823694 2909621 provision.go:138] copyHostCerts
	I0914 22:53:54.823733 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem
	I0914 22:53:54.823767 2909621 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem, removing ...
	I0914 22:53:54.823774 2909621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem
	I0914 22:53:54.823844 2909621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem (1078 bytes)
	I0914 22:53:54.823919 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem
	I0914 22:53:54.823935 2909621 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem, removing ...
	I0914 22:53:54.823940 2909621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem
	I0914 22:53:54.823964 2909621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem (1123 bytes)
	I0914 22:53:54.824002 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem
	I0914 22:53:54.824017 2909621 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem, removing ...
	I0914 22:53:54.824021 2909621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem
	I0914 22:53:54.824043 2909621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem (1675 bytes)
	I0914 22:53:54.824084 2909621 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem org=jenkins.multinode-174950-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-174950-m02]
	I0914 22:53:56.222121 2909621 provision.go:172] copyRemoteCerts
	I0914 22:53:56.222251 2909621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 22:53:56.222320 2909621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-174950-m02
	I0914 22:53:56.244075 2909621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36468 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/multinode-174950-m02/id_rsa Username:docker}
	I0914 22:53:56.347340 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 22:53:56.347400 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 22:53:56.379981 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 22:53:56.380040 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0914 22:53:56.410699 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 22:53:56.410800 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 22:53:56.440778 2909621 provision.go:86] duration metric: configureAuth took 1.642558107s
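
configureAuth generated a server certificate with the SANs listed above and copied ca.pem, server.pem and server-key.pem into /etc/docker on the new node over SSH (forwarded port 36468, user docker, key under .minikube/machines/multinode-174950-m02). A rough sketch of opening such a session with golang.org/x/crypto/ssh; the host-key check is relaxed only because the target is a throwaway local container:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path and port taken from this run's log.
        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/multinode-174950-m02/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local throwaway VM only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:36468", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, err := session.CombinedOutput("cat /etc/os-release")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }
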
	I0914 22:53:56.440806 2909621 ubuntu.go:193] setting minikube options for container-runtime
	I0914 22:53:56.441018 2909621 config.go:182] Loaded profile config "multinode-174950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:53:56.441125 2909621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-174950-m02
	I0914 22:53:56.461636 2909621 main.go:141] libmachine: Using SSH client type: native
	I0914 22:53:56.462080 2909621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36468 <nil> <nil>}
	I0914 22:53:56.462171 2909621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 22:53:56.726428 2909621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 22:53:56.726502 2909621 machine.go:91] provisioned docker machine in 2.354992541s
	I0914 22:53:56.726530 2909621 client.go:171] LocalClient.Create took 8.563681386s
	I0914 22:53:56.726558 2909621 start.go:167] duration metric: libmachine.API.Create for "multinode-174950" took 8.563740832s
	I0914 22:53:56.726595 2909621 start.go:300] post-start starting for "multinode-174950-m02" (driver="docker")
	I0914 22:53:56.726617 2909621 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 22:53:56.726731 2909621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 22:53:56.726818 2909621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-174950-m02
	I0914 22:53:56.747590 2909621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36468 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/multinode-174950-m02/id_rsa Username:docker}
	I0914 22:53:56.851761 2909621 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 22:53:56.856036 2909621 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0914 22:53:56.856058 2909621 command_runner.go:130] > NAME="Ubuntu"
	I0914 22:53:56.856068 2909621 command_runner.go:130] > VERSION_ID="22.04"
	I0914 22:53:56.856075 2909621 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0914 22:53:56.856081 2909621 command_runner.go:130] > VERSION_CODENAME=jammy
	I0914 22:53:56.856085 2909621 command_runner.go:130] > ID=ubuntu
	I0914 22:53:56.856090 2909621 command_runner.go:130] > ID_LIKE=debian
	I0914 22:53:56.856096 2909621 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0914 22:53:56.856103 2909621 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0914 22:53:56.856112 2909621 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0914 22:53:56.856121 2909621 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0914 22:53:56.856129 2909621 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0914 22:53:56.856174 2909621 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 22:53:56.856204 2909621 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 22:53:56.856218 2909621 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 22:53:56.856229 2909621 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0914 22:53:56.856239 2909621 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-2840729/.minikube/addons for local assets ...
	I0914 22:53:56.856296 2909621 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-2840729/.minikube/files for local assets ...
	I0914 22:53:56.856377 2909621 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem -> 28461092.pem in /etc/ssl/certs
	I0914 22:53:56.856386 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem -> /etc/ssl/certs/28461092.pem
	I0914 22:53:56.856485 2909621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 22:53:56.867459 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem --> /etc/ssl/certs/28461092.pem (1708 bytes)
	I0914 22:53:56.897361 2909621 start.go:303] post-start completed in 170.739119ms
	I0914 22:53:56.897748 2909621 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-174950-m02
	I0914 22:53:56.916718 2909621 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/config.json ...
	I0914 22:53:56.917099 2909621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 22:53:56.917154 2909621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-174950-m02
	I0914 22:53:56.942161 2909621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36468 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/multinode-174950-m02/id_rsa Username:docker}
	I0914 22:53:57.042721 2909621 command_runner.go:130] > 12%!
	(MISSING)I0914 22:53:57.042797 2909621 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 22:53:57.052465 2909621 command_runner.go:130] > 173G
	I0914 22:53:57.053022 2909621 start.go:128] duration metric: createHost completed in 8.893087032s
	I0914 22:53:57.053042 2909621 start.go:83] releasing machines lock for "multinode-174950-m02", held for 8.893224969s
	I0914 22:53:57.053119 2909621 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-174950-m02
	I0914 22:53:57.074512 2909621 out.go:177] * Found network options:
	I0914 22:53:57.076317 2909621 out.go:177]   - NO_PROXY=192.168.58.2
	W0914 22:53:57.078468 2909621 proxy.go:119] fail to check proxy env: Error ip not in block
	W0914 22:53:57.078529 2909621 proxy.go:119] fail to check proxy env: Error ip not in block
	I0914 22:53:57.078600 2909621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 22:53:57.078647 2909621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-174950-m02
	I0914 22:53:57.078933 2909621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 22:53:57.078987 2909621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-174950-m02
	I0914 22:53:57.103061 2909621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36468 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/multinode-174950-m02/id_rsa Username:docker}
	I0914 22:53:57.116680 2909621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36468 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/multinode-174950-m02/id_rsa Username:docker}
	I0914 22:53:57.348908 2909621 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0914 22:53:57.400825 2909621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 22:53:57.406368 2909621 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0914 22:53:57.406431 2909621 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0914 22:53:57.406454 2909621 command_runner.go:130] > Device: b3h/179d	Inode: 2089567     Links: 1
	I0914 22:53:57.406470 2909621 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 22:53:57.406491 2909621 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0914 22:53:57.406499 2909621 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0914 22:53:57.406505 2909621 command_runner.go:130] > Change: 2023-09-14 22:27:04.470483202 +0000
	I0914 22:53:57.406516 2909621 command_runner.go:130] >  Birth: 2023-09-14 22:27:04.470483202 +0000
	I0914 22:53:57.406603 2909621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:53:57.431001 2909621 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0914 22:53:57.431080 2909621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 22:53:57.472279 2909621 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0914 22:53:57.472316 2909621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
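
Since kindnet provides the pod network, the stock bridge and podman CNI configs are parked by renaming them with a .mk_disabled suffix (the loopback config got the same treatment just before). The remote run does this with find/mv over SSH as logged; locally the same rename could look like this sketch:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Patterns matching the bridge/podman configs that get parked, as in the log.
        patterns := []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"}
        for _, p := range patterns {
            matches, err := filepath.Glob(p)
            if err != nil {
                panic(err)
            }
            for _, m := range matches {
                if filepath.Ext(m) == ".mk_disabled" {
                    continue // already parked
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    fmt.Println("rename failed:", err)
                    continue
                }
                fmt.Println("disabled", m)
            }
        }
    }
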
	I0914 22:53:57.472325 2909621 start.go:469] detecting cgroup driver to use...
	I0914 22:53:57.472355 2909621 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0914 22:53:57.472412 2909621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 22:53:57.492103 2909621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 22:53:57.506954 2909621 docker.go:196] disabling cri-docker service (if available) ...
	I0914 22:53:57.507017 2909621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 22:53:57.522912 2909621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 22:53:57.539936 2909621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 22:53:57.642780 2909621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 22:53:57.749480 2909621 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0914 22:53:57.749508 2909621 docker.go:212] disabling docker service ...
	I0914 22:53:57.749575 2909621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 22:53:57.771460 2909621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 22:53:57.785862 2909621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 22:53:57.803514 2909621 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0914 22:53:57.889492 2909621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 22:53:58.001473 2909621 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0914 22:53:58.001545 2909621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 22:53:58.015510 2909621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 22:53:58.036193 2909621 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0914 22:53:58.037852 2909621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 22:53:58.037928 2909621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:53:58.052859 2909621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 22:53:58.052943 2909621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:53:58.065619 2909621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:53:58.078867 2909621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 22:53:58.091873 2909621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 22:53:58.104392 2909621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 22:53:58.113411 2909621 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0914 22:53:58.114420 2909621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 22:53:58.125333 2909621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 22:53:58.229659 2909621 ssh_runner.go:195] Run: sudo systemctl restart crio
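
The three sed commands rewrite the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf so the pause image is registry.k8s.io/pause:3.9 and the cgroup settings match the cgroupfs driver detected on the host, after which crio is restarted. A local sketch of the same line-level rewrite; the drop-in content below is made up for illustration and the regular expressions simply mirror the sed patterns:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Made-up drop-in content, only to show the shape of the rewrite.
        conf := `pause_image = "registry.k8s.io/pause:3.8"
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
        // Mirror the sed edits from the log: swap the pause image, force cgroupfs,
        // drop the old conmon_cgroup line and re-add it as "pod" after cgroup_manager.
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).
            ReplaceAllString(conf, "")
        conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
            ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")
        fmt.Print(conf)
    }

Note that the raw-string config literal must start its lines at column 0 in a real file; it is indented here only to match the surrounding block.
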
	I0914 22:53:58.351667 2909621 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 22:53:58.351749 2909621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 22:53:58.356782 2909621 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0914 22:53:58.356816 2909621 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0914 22:53:58.356824 2909621 command_runner.go:130] > Device: bch/188d	Inode: 190         Links: 1
	I0914 22:53:58.356833 2909621 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 22:53:58.356839 2909621 command_runner.go:130] > Access: 2023-09-14 22:53:58.329333477 +0000
	I0914 22:53:58.356846 2909621 command_runner.go:130] > Modify: 2023-09-14 22:53:58.329333477 +0000
	I0914 22:53:58.356852 2909621 command_runner.go:130] > Change: 2023-09-14 22:53:58.329333477 +0000
	I0914 22:53:58.356861 2909621 command_runner.go:130] >  Birth: -
	I0914 22:53:58.357132 2909621 start.go:537] Will wait 60s for crictl version
	I0914 22:53:58.357200 2909621 ssh_runner.go:195] Run: which crictl
	I0914 22:53:58.360922 2909621 command_runner.go:130] > /usr/bin/crictl
	I0914 22:53:58.361394 2909621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 22:53:58.413218 2909621 command_runner.go:130] > Version:  0.1.0
	I0914 22:53:58.413241 2909621 command_runner.go:130] > RuntimeName:  cri-o
	I0914 22:53:58.413248 2909621 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0914 22:53:58.413255 2909621 command_runner.go:130] > RuntimeApiVersion:  v1
	I0914 22:53:58.415898 2909621 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0914 22:53:58.415994 2909621 ssh_runner.go:195] Run: crio --version
	I0914 22:53:58.458374 2909621 command_runner.go:130] > crio version 1.24.6
	I0914 22:53:58.458435 2909621 command_runner.go:130] > Version:          1.24.6
	I0914 22:53:58.458450 2909621 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0914 22:53:58.458457 2909621 command_runner.go:130] > GitTreeState:     clean
	I0914 22:53:58.458464 2909621 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0914 22:53:58.458470 2909621 command_runner.go:130] > GoVersion:        go1.18.2
	I0914 22:53:58.458484 2909621 command_runner.go:130] > Compiler:         gc
	I0914 22:53:58.458490 2909621 command_runner.go:130] > Platform:         linux/arm64
	I0914 22:53:58.458499 2909621 command_runner.go:130] > Linkmode:         dynamic
	I0914 22:53:58.458508 2909621 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0914 22:53:58.458514 2909621 command_runner.go:130] > SeccompEnabled:   true
	I0914 22:53:58.458519 2909621 command_runner.go:130] > AppArmorEnabled:  false
	I0914 22:53:58.458613 2909621 ssh_runner.go:195] Run: crio --version
	I0914 22:53:58.498638 2909621 command_runner.go:130] > crio version 1.24.6
	I0914 22:53:58.498660 2909621 command_runner.go:130] > Version:          1.24.6
	I0914 22:53:58.498670 2909621 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0914 22:53:58.498675 2909621 command_runner.go:130] > GitTreeState:     clean
	I0914 22:53:58.498683 2909621 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0914 22:53:58.498689 2909621 command_runner.go:130] > GoVersion:        go1.18.2
	I0914 22:53:58.498694 2909621 command_runner.go:130] > Compiler:         gc
	I0914 22:53:58.498704 2909621 command_runner.go:130] > Platform:         linux/arm64
	I0914 22:53:58.498710 2909621 command_runner.go:130] > Linkmode:         dynamic
	I0914 22:53:58.498720 2909621 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0914 22:53:58.498730 2909621 command_runner.go:130] > SeccompEnabled:   true
	I0914 22:53:58.498735 2909621 command_runner.go:130] > AppArmorEnabled:  false
	I0914 22:53:58.503040 2909621 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0914 22:53:58.505041 2909621 out.go:177]   - env NO_PROXY=192.168.58.2
	I0914 22:53:58.506892 2909621 cli_runner.go:164] Run: docker network inspect multinode-174950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 22:53:58.524892 2909621 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0914 22:53:58.530323 2909621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:53:58.543516 2909621 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950 for IP: 192.168.58.3
	I0914 22:53:58.543548 2909621 certs.go:190] acquiring lock for shared ca certs: {Name:mk7b43b7d537d49c569d06654003547535d1ca4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:53:58.543678 2909621 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.key
	I0914 22:53:58.543727 2909621 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.key
	I0914 22:53:58.543742 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 22:53:58.543757 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 22:53:58.543771 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 22:53:58.543782 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 22:53:58.543836 2909621 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/2846109.pem (1338 bytes)
	W0914 22:53:58.543869 2909621 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/2846109_empty.pem, impossibly tiny 0 bytes
	I0914 22:53:58.543883 2909621 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 22:53:58.543908 2909621 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem (1078 bytes)
	I0914 22:53:58.543937 2909621 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem (1123 bytes)
	I0914 22:53:58.543963 2909621 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem (1675 bytes)
	I0914 22:53:58.544010 2909621 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem (1708 bytes)
	I0914 22:53:58.544041 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:53:58.544056 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/2846109.pem -> /usr/share/ca-certificates/2846109.pem
	I0914 22:53:58.544068 2909621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem -> /usr/share/ca-certificates/28461092.pem
	I0914 22:53:58.544393 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 22:53:58.573172 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 22:53:58.600471 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 22:53:58.628075 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 22:53:58.657653 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 22:53:58.686268 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/2846109.pem --> /usr/share/ca-certificates/2846109.pem (1338 bytes)
	I0914 22:53:58.714172 2909621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem --> /usr/share/ca-certificates/28461092.pem (1708 bytes)
	I0914 22:53:58.742067 2909621 ssh_runner.go:195] Run: openssl version
	I0914 22:53:58.748747 2909621 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0914 22:53:58.749126 2909621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 22:53:58.760572 2909621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:53:58.765183 2909621 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 14 22:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:53:58.765227 2909621 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 22:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:53:58.765278 2909621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 22:53:58.773268 2909621 command_runner.go:130] > b5213941
	I0914 22:53:58.773705 2909621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 22:53:58.784998 2909621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2846109.pem && ln -fs /usr/share/ca-certificates/2846109.pem /etc/ssl/certs/2846109.pem"
	I0914 22:53:58.796151 2909621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2846109.pem
	I0914 22:53:58.800733 2909621 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 14 22:34 /usr/share/ca-certificates/2846109.pem
	I0914 22:53:58.800764 2909621 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 22:34 /usr/share/ca-certificates/2846109.pem
	I0914 22:53:58.800819 2909621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2846109.pem
	I0914 22:53:58.809093 2909621 command_runner.go:130] > 51391683
	I0914 22:53:58.809521 2909621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2846109.pem /etc/ssl/certs/51391683.0"
	I0914 22:53:58.821978 2909621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28461092.pem && ln -fs /usr/share/ca-certificates/28461092.pem /etc/ssl/certs/28461092.pem"
	I0914 22:53:58.833472 2909621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28461092.pem
	I0914 22:53:58.837899 2909621 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 14 22:34 /usr/share/ca-certificates/28461092.pem
	I0914 22:53:58.838005 2909621 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 22:34 /usr/share/ca-certificates/28461092.pem
	I0914 22:53:58.838080 2909621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28461092.pem
	I0914 22:53:58.846077 2909621 command_runner.go:130] > 3ec20f2e
	I0914 22:53:58.846208 2909621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/28461092.pem /etc/ssl/certs/3ec20f2e.0"
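
The `openssl x509 -hash -noout` / `ln -fs` pairs above install each CA certificate under its OpenSSL subject-hash name in /etc/ssl/certs (e.g. b5213941.0 for minikubeCA.pem). A minimal local sketch of that idea, with illustrative paths and run directly rather than over minikube's ssh_runner:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of certPath and creates
// certsDir/<hash>.0 pointing at it, mirroring the
// "openssl x509 -hash -noout" + "ln -fs" steps in the log above.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // behave like ln -f: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative paths; the real run links /usr/share/ca-certificates entries.
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("error:", err)
	}
}
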
	I0914 22:53:58.857491 2909621 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 22:53:58.861716 2909621 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 22:53:58.861792 2909621 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0914 22:53:58.861900 2909621 ssh_runner.go:195] Run: crio config
	I0914 22:53:58.913635 2909621 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0914 22:53:58.913661 2909621 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0914 22:53:58.913670 2909621 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0914 22:53:58.913674 2909621 command_runner.go:130] > #
	I0914 22:53:58.913683 2909621 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0914 22:53:58.913691 2909621 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0914 22:53:58.913709 2909621 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0914 22:53:58.913722 2909621 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0914 22:53:58.913727 2909621 command_runner.go:130] > # reload'.
	I0914 22:53:58.913739 2909621 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0914 22:53:58.913747 2909621 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0914 22:53:58.913755 2909621 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0914 22:53:58.913766 2909621 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0914 22:53:58.913777 2909621 command_runner.go:130] > [crio]
	I0914 22:53:58.913787 2909621 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0914 22:53:58.913796 2909621 command_runner.go:130] > # containers images, in this directory.
	I0914 22:53:58.913811 2909621 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0914 22:53:58.913819 2909621 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0914 22:53:58.913830 2909621 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0914 22:53:58.913838 2909621 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0914 22:53:58.913857 2909621 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0914 22:53:58.913863 2909621 command_runner.go:130] > # storage_driver = "vfs"
	I0914 22:53:58.913871 2909621 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0914 22:53:58.913880 2909621 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0914 22:53:58.914191 2909621 command_runner.go:130] > # storage_option = [
	I0914 22:53:58.914206 2909621 command_runner.go:130] > # ]
	I0914 22:53:58.914215 2909621 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0914 22:53:58.914223 2909621 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0914 22:53:58.914960 2909621 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0914 22:53:58.914990 2909621 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0914 22:53:58.915000 2909621 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0914 22:53:58.915006 2909621 command_runner.go:130] > # always happen on a node reboot
	I0914 22:53:58.915013 2909621 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0914 22:53:58.915023 2909621 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0914 22:53:58.915035 2909621 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0914 22:53:58.915047 2909621 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0914 22:53:58.915065 2909621 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0914 22:53:58.915077 2909621 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0914 22:53:58.915088 2909621 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0914 22:53:58.915096 2909621 command_runner.go:130] > # internal_wipe = true
	I0914 22:53:58.915103 2909621 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0914 22:53:58.915110 2909621 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0914 22:53:58.915120 2909621 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0914 22:53:58.915128 2909621 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0914 22:53:58.915147 2909621 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0914 22:53:58.915156 2909621 command_runner.go:130] > [crio.api]
	I0914 22:53:58.915163 2909621 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0914 22:53:58.915171 2909621 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0914 22:53:58.915178 2909621 command_runner.go:130] > # IP address on which the stream server will listen.
	I0914 22:53:58.915186 2909621 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0914 22:53:58.915194 2909621 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0914 22:53:58.915201 2909621 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0914 22:53:58.915215 2909621 command_runner.go:130] > # stream_port = "0"
	I0914 22:53:58.915228 2909621 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0914 22:53:58.915234 2909621 command_runner.go:130] > # stream_enable_tls = false
	I0914 22:53:58.915244 2909621 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0914 22:53:58.915250 2909621 command_runner.go:130] > # stream_idle_timeout = ""
	I0914 22:53:58.915261 2909621 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0914 22:53:58.915269 2909621 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0914 22:53:58.915276 2909621 command_runner.go:130] > # minutes.
	I0914 22:53:58.915281 2909621 command_runner.go:130] > # stream_tls_cert = ""
	I0914 22:53:58.915295 2909621 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0914 22:53:58.915306 2909621 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0914 22:53:58.915312 2909621 command_runner.go:130] > # stream_tls_key = ""
	I0914 22:53:58.915322 2909621 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0914 22:53:58.915330 2909621 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0914 22:53:58.915340 2909621 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0914 22:53:58.915346 2909621 command_runner.go:130] > # stream_tls_ca = ""
	I0914 22:53:58.915363 2909621 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0914 22:53:58.915373 2909621 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0914 22:53:58.915389 2909621 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0914 22:53:58.915397 2909621 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0914 22:53:58.915427 2909621 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0914 22:53:58.915447 2909621 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0914 22:53:58.915454 2909621 command_runner.go:130] > [crio.runtime]
	I0914 22:53:58.915462 2909621 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0914 22:53:58.915474 2909621 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0914 22:53:58.915480 2909621 command_runner.go:130] > # "nofile=1024:2048"
	I0914 22:53:58.915491 2909621 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0914 22:53:58.915496 2909621 command_runner.go:130] > # default_ulimits = [
	I0914 22:53:58.915503 2909621 command_runner.go:130] > # ]
	I0914 22:53:58.915517 2909621 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0914 22:53:58.915526 2909621 command_runner.go:130] > # no_pivot = false
	I0914 22:53:58.915533 2909621 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0914 22:53:58.915541 2909621 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0914 22:53:58.915550 2909621 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0914 22:53:58.915558 2909621 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0914 22:53:58.915569 2909621 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0914 22:53:58.915578 2909621 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0914 22:53:58.915593 2909621 command_runner.go:130] > # conmon = ""
	I0914 22:53:58.915604 2909621 command_runner.go:130] > # Cgroup setting for conmon
	I0914 22:53:58.915613 2909621 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0914 22:53:58.915622 2909621 command_runner.go:130] > conmon_cgroup = "pod"
	I0914 22:53:58.915630 2909621 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0914 22:53:58.915639 2909621 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0914 22:53:58.915648 2909621 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0914 22:53:58.915656 2909621 command_runner.go:130] > # conmon_env = [
	I0914 22:53:58.915667 2909621 command_runner.go:130] > # ]
	I0914 22:53:58.915677 2909621 command_runner.go:130] > # Additional environment variables to set for all the
	I0914 22:53:58.915684 2909621 command_runner.go:130] > # containers. These are overridden if set in the
	I0914 22:53:58.915695 2909621 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0914 22:53:58.915700 2909621 command_runner.go:130] > # default_env = [
	I0914 22:53:58.915707 2909621 command_runner.go:130] > # ]
	I0914 22:53:58.915714 2909621 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0914 22:53:58.915721 2909621 command_runner.go:130] > # selinux = false
	I0914 22:53:58.915729 2909621 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0914 22:53:58.915746 2909621 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0914 22:53:58.915756 2909621 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0914 22:53:58.915761 2909621 command_runner.go:130] > # seccomp_profile = ""
	I0914 22:53:58.915771 2909621 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0914 22:53:58.915778 2909621 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0914 22:53:58.915789 2909621 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0914 22:53:58.915795 2909621 command_runner.go:130] > # which might increase security.
	I0914 22:53:58.915803 2909621 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0914 22:53:58.915817 2909621 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0914 22:53:58.915828 2909621 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0914 22:53:58.915836 2909621 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0914 22:53:58.915847 2909621 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0914 22:53:58.915853 2909621 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:53:58.915862 2909621 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0914 22:53:58.915869 2909621 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0914 22:53:58.915877 2909621 command_runner.go:130] > # the cgroup blockio controller.
	I0914 22:53:58.915889 2909621 command_runner.go:130] > # blockio_config_file = ""
	I0914 22:53:58.915899 2909621 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0914 22:53:58.915904 2909621 command_runner.go:130] > # irqbalance daemon.
	I0914 22:53:58.915911 2909621 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0914 22:53:58.915922 2909621 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0914 22:53:58.915929 2909621 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:53:58.915938 2909621 command_runner.go:130] > # rdt_config_file = ""
	I0914 22:53:58.915946 2909621 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0914 22:53:58.915954 2909621 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0914 22:53:58.915968 2909621 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0914 22:53:58.915977 2909621 command_runner.go:130] > # separate_pull_cgroup = ""
	I0914 22:53:58.915985 2909621 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0914 22:53:58.915992 2909621 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0914 22:53:58.916000 2909621 command_runner.go:130] > # will be added.
	I0914 22:53:58.916006 2909621 command_runner.go:130] > # default_capabilities = [
	I0914 22:53:58.916010 2909621 command_runner.go:130] > # 	"CHOWN",
	I0914 22:53:58.916017 2909621 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0914 22:53:58.916029 2909621 command_runner.go:130] > # 	"FSETID",
	I0914 22:53:58.916039 2909621 command_runner.go:130] > # 	"FOWNER",
	I0914 22:53:58.916048 2909621 command_runner.go:130] > # 	"SETGID",
	I0914 22:53:58.916053 2909621 command_runner.go:130] > # 	"SETUID",
	I0914 22:53:58.916058 2909621 command_runner.go:130] > # 	"SETPCAP",
	I0914 22:53:58.916066 2909621 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0914 22:53:58.916071 2909621 command_runner.go:130] > # 	"KILL",
	I0914 22:53:58.916075 2909621 command_runner.go:130] > # ]
	I0914 22:53:58.916087 2909621 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0914 22:53:58.916097 2909621 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0914 22:53:58.916451 2909621 command_runner.go:130] > # add_inheritable_capabilities = true
	I0914 22:53:58.916471 2909621 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0914 22:53:58.916480 2909621 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0914 22:53:58.916509 2909621 command_runner.go:130] > # default_sysctls = [
	I0914 22:53:58.916514 2909621 command_runner.go:130] > # ]
	I0914 22:53:58.916521 2909621 command_runner.go:130] > # List of devices on the host that a
	I0914 22:53:58.916532 2909621 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0914 22:53:58.916537 2909621 command_runner.go:130] > # allowed_devices = [
	I0914 22:53:58.916545 2909621 command_runner.go:130] > # 	"/dev/fuse",
	I0914 22:53:58.916550 2909621 command_runner.go:130] > # ]
	I0914 22:53:58.916556 2909621 command_runner.go:130] > # List of additional devices. specified as
	I0914 22:53:58.916584 2909621 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0914 22:53:58.916595 2909621 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0914 22:53:58.916602 2909621 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0914 22:53:58.916611 2909621 command_runner.go:130] > # additional_devices = [
	I0914 22:53:58.916615 2909621 command_runner.go:130] > # ]
	I0914 22:53:58.916622 2909621 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0914 22:53:58.916631 2909621 command_runner.go:130] > # cdi_spec_dirs = [
	I0914 22:53:58.916637 2909621 command_runner.go:130] > # 	"/etc/cdi",
	I0914 22:53:58.916645 2909621 command_runner.go:130] > # 	"/var/run/cdi",
	I0914 22:53:58.916651 2909621 command_runner.go:130] > # ]
	I0914 22:53:58.916666 2909621 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0914 22:53:58.916678 2909621 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0914 22:53:58.916683 2909621 command_runner.go:130] > # Defaults to false.
	I0914 22:53:58.916697 2909621 command_runner.go:130] > # device_ownership_from_security_context = false
	I0914 22:53:58.916705 2909621 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0914 22:53:58.916716 2909621 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0914 22:53:58.916722 2909621 command_runner.go:130] > # hooks_dir = [
	I0914 22:53:58.916733 2909621 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0914 22:53:58.916741 2909621 command_runner.go:130] > # ]
	I0914 22:53:58.916749 2909621 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0914 22:53:58.916757 2909621 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0914 22:53:58.916766 2909621 command_runner.go:130] > # its default mounts from the following two files:
	I0914 22:53:58.916773 2909621 command_runner.go:130] > #
	I0914 22:53:58.916781 2909621 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0914 22:53:58.916789 2909621 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0914 22:53:58.916800 2909621 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0914 22:53:58.916810 2909621 command_runner.go:130] > #
	I0914 22:53:58.916821 2909621 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0914 22:53:58.916830 2909621 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0914 22:53:58.916840 2909621 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0914 22:53:58.916846 2909621 command_runner.go:130] > #      only add mounts it finds in this file.
	I0914 22:53:58.916850 2909621 command_runner.go:130] > #
	I0914 22:53:58.916858 2909621 command_runner.go:130] > # default_mounts_file = ""
	I0914 22:53:58.916868 2909621 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0914 22:53:58.916882 2909621 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0914 22:53:58.916891 2909621 command_runner.go:130] > # pids_limit = 0
	I0914 22:53:58.916899 2909621 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0914 22:53:58.916909 2909621 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0914 22:53:58.916917 2909621 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0914 22:53:58.916930 2909621 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0914 22:53:58.916935 2909621 command_runner.go:130] > # log_size_max = -1
	I0914 22:53:58.916944 2909621 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0914 22:53:58.916957 2909621 command_runner.go:130] > # log_to_journald = false
	I0914 22:53:58.916966 2909621 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0914 22:53:58.916976 2909621 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0914 22:53:58.916983 2909621 command_runner.go:130] > # Path to directory for container attach sockets.
	I0914 22:53:58.916993 2909621 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0914 22:53:58.916999 2909621 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0914 22:53:58.917007 2909621 command_runner.go:130] > # bind_mount_prefix = ""
	I0914 22:53:58.917015 2909621 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0914 22:53:58.917020 2909621 command_runner.go:130] > # read_only = false
	I0914 22:53:58.917037 2909621 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0914 22:53:58.917049 2909621 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0914 22:53:58.917056 2909621 command_runner.go:130] > # live configuration reload.
	I0914 22:53:58.917064 2909621 command_runner.go:130] > # log_level = "info"
	I0914 22:53:58.917071 2909621 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0914 22:53:58.917080 2909621 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:53:58.917084 2909621 command_runner.go:130] > # log_filter = ""
	I0914 22:53:58.917095 2909621 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0914 22:53:58.917109 2909621 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0914 22:53:58.917117 2909621 command_runner.go:130] > # separated by comma.
	I0914 22:53:58.917123 2909621 command_runner.go:130] > # uid_mappings = ""
	I0914 22:53:58.917130 2909621 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0914 22:53:58.917141 2909621 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0914 22:53:58.917146 2909621 command_runner.go:130] > # separated by comma.
	I0914 22:53:58.917154 2909621 command_runner.go:130] > # gid_mappings = ""
	I0914 22:53:58.917161 2909621 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0914 22:53:58.917172 2909621 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0914 22:53:58.917187 2909621 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0914 22:53:58.917197 2909621 command_runner.go:130] > # minimum_mappable_uid = -1
	I0914 22:53:58.917204 2909621 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0914 22:53:58.917216 2909621 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0914 22:53:58.917246 2909621 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0914 22:53:58.917265 2909621 command_runner.go:130] > # minimum_mappable_gid = -1
	I0914 22:53:58.917276 2909621 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0914 22:53:58.917284 2909621 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0914 22:53:58.917296 2909621 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0914 22:53:58.917605 2909621 command_runner.go:130] > # ctr_stop_timeout = 30
	I0914 22:53:58.917625 2909621 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0914 22:53:58.917633 2909621 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0914 22:53:58.917643 2909621 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0914 22:53:58.917697 2909621 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0914 22:53:58.917711 2909621 command_runner.go:130] > # drop_infra_ctr = true
	I0914 22:53:58.917719 2909621 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0914 22:53:58.917728 2909621 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0914 22:53:58.917758 2909621 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0914 22:53:58.917771 2909621 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0914 22:53:58.917779 2909621 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0914 22:53:58.917785 2909621 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0914 22:53:58.917795 2909621 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0914 22:53:58.917804 2909621 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0914 22:53:58.917812 2909621 command_runner.go:130] > # pinns_path = ""
	I0914 22:53:58.917830 2909621 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0914 22:53:58.917842 2909621 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0914 22:53:58.917851 2909621 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0914 22:53:58.917859 2909621 command_runner.go:130] > # default_runtime = "runc"
	I0914 22:53:58.917866 2909621 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0914 22:53:58.917875 2909621 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0914 22:53:58.917889 2909621 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I0914 22:53:58.917901 2909621 command_runner.go:130] > # creation as a file is not desired either.
	I0914 22:53:58.917916 2909621 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0914 22:53:58.917922 2909621 command_runner.go:130] > # the hostname is being managed dynamically.
	I0914 22:53:58.917931 2909621 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0914 22:53:58.917935 2909621 command_runner.go:130] > # ]
	I0914 22:53:58.917949 2909621 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0914 22:53:58.917958 2909621 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0914 22:53:58.917975 2909621 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0914 22:53:58.917986 2909621 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0914 22:53:58.917995 2909621 command_runner.go:130] > #
	I0914 22:53:58.918001 2909621 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0914 22:53:58.918007 2909621 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0914 22:53:58.918015 2909621 command_runner.go:130] > #  runtime_type = "oci"
	I0914 22:53:58.918021 2909621 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0914 22:53:58.918029 2909621 command_runner.go:130] > #  privileged_without_host_devices = false
	I0914 22:53:58.918036 2909621 command_runner.go:130] > #  allowed_annotations = []
	I0914 22:53:58.918043 2909621 command_runner.go:130] > # Where:
	I0914 22:53:58.918061 2909621 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0914 22:53:58.918069 2909621 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0914 22:53:58.918079 2909621 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0914 22:53:58.918091 2909621 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0914 22:53:58.918097 2909621 command_runner.go:130] > #   in $PATH.
	I0914 22:53:58.918108 2909621 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0914 22:53:58.918114 2909621 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0914 22:53:58.918131 2909621 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0914 22:53:58.918140 2909621 command_runner.go:130] > #   state.
	I0914 22:53:58.918148 2909621 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0914 22:53:58.918155 2909621 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0914 22:53:58.918166 2909621 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0914 22:53:58.918173 2909621 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0914 22:53:58.918184 2909621 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0914 22:53:58.918193 2909621 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0914 22:53:58.918208 2909621 command_runner.go:130] > #   The currently recognized values are:
	I0914 22:53:58.918216 2909621 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0914 22:53:58.918228 2909621 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0914 22:53:58.918235 2909621 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0914 22:53:58.918243 2909621 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0914 22:53:58.918255 2909621 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0914 22:53:58.918263 2909621 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0914 22:53:58.918359 2909621 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0914 22:53:58.918376 2909621 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0914 22:53:58.918383 2909621 command_runner.go:130] > #   should be moved to the container's cgroup
	I0914 22:53:58.918392 2909621 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0914 22:53:58.918399 2909621 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0914 22:53:58.918407 2909621 command_runner.go:130] > runtime_type = "oci"
	I0914 22:53:58.918412 2909621 command_runner.go:130] > runtime_root = "/run/runc"
	I0914 22:53:58.918417 2909621 command_runner.go:130] > runtime_config_path = ""
	I0914 22:53:58.918425 2909621 command_runner.go:130] > monitor_path = ""
	I0914 22:53:58.918437 2909621 command_runner.go:130] > monitor_cgroup = ""
	I0914 22:53:58.918447 2909621 command_runner.go:130] > monitor_exec_cgroup = ""
	I0914 22:53:58.918494 2909621 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0914 22:53:58.918510 2909621 command_runner.go:130] > # running containers
	I0914 22:53:58.918519 2909621 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0914 22:53:58.918530 2909621 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0914 22:53:58.918539 2909621 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0914 22:53:58.918549 2909621 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0914 22:53:58.918556 2909621 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0914 22:53:58.918565 2909621 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0914 22:53:58.918572 2909621 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0914 22:53:58.918592 2909621 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0914 22:53:58.918602 2909621 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0914 22:53:58.918608 2909621 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0914 22:53:58.918619 2909621 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0914 22:53:58.918629 2909621 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0914 22:53:58.918637 2909621 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0914 22:53:58.918650 2909621 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0914 22:53:58.918666 2909621 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0914 22:53:58.918677 2909621 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0914 22:53:58.918688 2909621 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0914 22:53:58.918702 2909621 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0914 22:53:58.918709 2909621 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0914 22:53:58.918721 2909621 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0914 22:53:58.918726 2909621 command_runner.go:130] > # Example:
	I0914 22:53:58.918741 2909621 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0914 22:53:58.918748 2909621 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0914 22:53:58.918756 2909621 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0914 22:53:58.918763 2909621 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0914 22:53:58.918770 2909621 command_runner.go:130] > # cpuset = 0
	I0914 22:53:58.918775 2909621 command_runner.go:130] > # cpushares = "0-1"
	I0914 22:53:58.918780 2909621 command_runner.go:130] > # Where:
	I0914 22:53:58.918786 2909621 command_runner.go:130] > # The workload name is workload-type.
	I0914 22:53:58.918798 2909621 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0914 22:53:58.918813 2909621 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0914 22:53:58.918823 2909621 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0914 22:53:58.918835 2909621 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0914 22:53:58.918847 2909621 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0914 22:53:58.918852 2909621 command_runner.go:130] > # 
	I0914 22:53:58.918862 2909621 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0914 22:53:58.918867 2909621 command_runner.go:130] > #
	I0914 22:53:58.918875 2909621 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0914 22:53:58.918895 2909621 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0914 22:53:58.918907 2909621 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0914 22:53:58.918916 2909621 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0914 22:53:58.918926 2909621 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0914 22:53:58.918932 2909621 command_runner.go:130] > [crio.image]
	I0914 22:53:58.918942 2909621 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0914 22:53:58.918948 2909621 command_runner.go:130] > # default_transport = "docker://"
	I0914 22:53:58.918955 2909621 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0914 22:53:58.918969 2909621 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0914 22:53:58.918978 2909621 command_runner.go:130] > # global_auth_file = ""
	I0914 22:53:58.918985 2909621 command_runner.go:130] > # The image used to instantiate infra containers.
	I0914 22:53:58.918993 2909621 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:53:58.919001 2909621 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0914 22:53:58.919010 2909621 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0914 22:53:58.919019 2909621 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0914 22:53:58.919026 2909621 command_runner.go:130] > # This option supports live configuration reload.
	I0914 22:53:58.919031 2909621 command_runner.go:130] > # pause_image_auth_file = ""
	I0914 22:53:58.919044 2909621 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0914 22:53:58.919055 2909621 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0914 22:53:58.919063 2909621 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0914 22:53:58.919073 2909621 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0914 22:53:58.919079 2909621 command_runner.go:130] > # pause_command = "/pause"
	I0914 22:53:58.919163 2909621 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0914 22:53:58.919180 2909621 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0914 22:53:58.919190 2909621 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0914 22:53:58.919221 2909621 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0914 22:53:58.919234 2909621 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0914 22:53:58.919240 2909621 command_runner.go:130] > # signature_policy = ""
	I0914 22:53:58.919250 2909621 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0914 22:53:58.919259 2909621 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0914 22:53:58.919266 2909621 command_runner.go:130] > # changing them here.
	I0914 22:53:58.919272 2909621 command_runner.go:130] > # insecure_registries = [
	I0914 22:53:58.919276 2909621 command_runner.go:130] > # ]
	I0914 22:53:58.919294 2909621 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0914 22:53:58.919304 2909621 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0914 22:53:58.919309 2909621 command_runner.go:130] > # image_volumes = "mkdir"
	I0914 22:53:58.919319 2909621 command_runner.go:130] > # Temporary directory to use for storing big files
	I0914 22:53:58.919325 2909621 command_runner.go:130] > # big_files_temporary_dir = ""
	I0914 22:53:58.919335 2909621 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I0914 22:53:58.919340 2909621 command_runner.go:130] > # CNI plugins.
	I0914 22:53:58.919350 2909621 command_runner.go:130] > [crio.network]
	I0914 22:53:58.919366 2909621 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0914 22:53:58.919376 2909621 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0914 22:53:58.919381 2909621 command_runner.go:130] > # cni_default_network = ""
	I0914 22:53:58.919391 2909621 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0914 22:53:58.919397 2909621 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0914 22:53:58.919406 2909621 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0914 22:53:58.919411 2909621 command_runner.go:130] > # plugin_dirs = [
	I0914 22:53:58.919419 2909621 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0914 22:53:58.919424 2909621 command_runner.go:130] > # ]
	I0914 22:53:58.919431 2909621 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0914 22:53:58.919445 2909621 command_runner.go:130] > [crio.metrics]
	I0914 22:53:58.919451 2909621 command_runner.go:130] > # Globally enable or disable metrics support.
	I0914 22:53:58.919457 2909621 command_runner.go:130] > # enable_metrics = false
	I0914 22:53:58.919465 2909621 command_runner.go:130] > # Specify enabled metrics collectors.
	I0914 22:53:58.919472 2909621 command_runner.go:130] > # Per default all metrics are enabled.
	I0914 22:53:58.919481 2909621 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0914 22:53:58.919492 2909621 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0914 22:53:58.919500 2909621 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0914 22:53:58.919508 2909621 command_runner.go:130] > # metrics_collectors = [
	I0914 22:53:58.919518 2909621 command_runner.go:130] > # 	"operations",
	I0914 22:53:58.919528 2909621 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0914 22:53:58.919534 2909621 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0914 22:53:58.919539 2909621 command_runner.go:130] > # 	"operations_errors",
	I0914 22:53:58.919545 2909621 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0914 22:53:58.919552 2909621 command_runner.go:130] > # 	"image_pulls_by_name",
	I0914 22:53:58.919558 2909621 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0914 22:53:58.919564 2909621 command_runner.go:130] > # 	"image_pulls_failures",
	I0914 22:53:58.919571 2909621 command_runner.go:130] > # 	"image_pulls_successes",
	I0914 22:53:58.919577 2909621 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0914 22:53:58.919594 2909621 command_runner.go:130] > # 	"image_layer_reuse",
	I0914 22:53:58.919603 2909621 command_runner.go:130] > # 	"containers_oom_total",
	I0914 22:53:58.919608 2909621 command_runner.go:130] > # 	"containers_oom",
	I0914 22:53:58.919614 2909621 command_runner.go:130] > # 	"processes_defunct",
	I0914 22:53:58.919619 2909621 command_runner.go:130] > # 	"operations_total",
	I0914 22:53:58.919627 2909621 command_runner.go:130] > # 	"operations_latency_seconds",
	I0914 22:53:58.919634 2909621 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0914 22:53:58.919642 2909621 command_runner.go:130] > # 	"operations_errors_total",
	I0914 22:53:58.919648 2909621 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0914 22:53:58.919656 2909621 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0914 22:53:58.919667 2909621 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0914 22:53:58.919676 2909621 command_runner.go:130] > # 	"image_pulls_success_total",
	I0914 22:53:58.919682 2909621 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0914 22:53:58.919690 2909621 command_runner.go:130] > # 	"containers_oom_count_total",
	I0914 22:53:58.919694 2909621 command_runner.go:130] > # ]
	I0914 22:53:58.919701 2909621 command_runner.go:130] > # The port on which the metrics server will listen.
	I0914 22:53:58.919709 2909621 command_runner.go:130] > # metrics_port = 9090
	I0914 22:53:58.919716 2909621 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0914 22:53:58.919723 2909621 command_runner.go:130] > # metrics_socket = ""
	I0914 22:53:58.919730 2909621 command_runner.go:130] > # The certificate for the secure metrics server.
	I0914 22:53:58.919746 2909621 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0914 22:53:58.919756 2909621 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0914 22:53:58.919763 2909621 command_runner.go:130] > # certificate on any modification event.
	I0914 22:53:58.919771 2909621 command_runner.go:130] > # metrics_cert = ""
	I0914 22:53:58.919777 2909621 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0914 22:53:58.919784 2909621 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0914 22:53:58.919789 2909621 command_runner.go:130] > # metrics_key = ""
	I0914 22:53:58.919799 2909621 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0914 22:53:58.919804 2909621 command_runner.go:130] > [crio.tracing]
	I0914 22:53:58.919819 2909621 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0914 22:53:58.919828 2909621 command_runner.go:130] > # enable_tracing = false
	I0914 22:53:58.919835 2909621 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0914 22:53:58.919843 2909621 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0914 22:53:58.919850 2909621 command_runner.go:130] > # Number of samples to collect per million spans.
	I0914 22:53:58.919858 2909621 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0914 22:53:58.919866 2909621 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0914 22:53:58.919871 2909621 command_runner.go:130] > [crio.stats]
	I0914 22:53:58.919879 2909621 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0914 22:53:58.919893 2909621 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0914 22:53:58.919902 2909621 command_runner.go:130] > # stats_collection_period = 0
	I0914 22:53:58.922205 2909621 command_runner.go:130] ! time="2023-09-14 22:53:58.910679715Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0914 22:53:58.922230 2909621 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
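The [crio.metrics], [crio.tracing] and [crio.stats] stanzas above are all commented-out defaults, so CRI-O 1.24.6 starts here with no metrics listener, trace exporter, or periodic stats collection. Purely as an illustrative sketch (nothing in this run enables it), switching metrics on would amount to dropping in a config fragment and scraping the default port 9090 shown above; the crio.conf.d path and the enable_metrics key are assumptions about stock CRI-O packaging, not taken from this log:

    # hypothetical drop-in; adjust the path if the distro uses a different crio.conf.d
    sudo tee /etc/crio/crio.conf.d/99-metrics.conf <<'EOF'
    [crio.metrics]
    enable_metrics = true
    metrics_port = 9090
    EOF
    sudo systemctl restart crio
    curl -s http://127.0.0.1:9090/metrics | grep '^crio_operations'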
	I0914 22:53:58.922286 2909621 cni.go:84] Creating CNI manager for ""
	I0914 22:53:58.922293 2909621 cni.go:136] 2 nodes found, recommending kindnet
	I0914 22:53:58.922302 2909621 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 22:53:58.922320 2909621 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-174950 NodeName:multinode-174950-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 22:53:58.922433 2909621 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-174950-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
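The block above is the full kubeadm configuration minikube renders for this join: an InitConfiguration for the worker's registration, the shared ClusterConfiguration, a KubeletConfiguration with image GC and disk evictions effectively disabled, and a KubeProxyConfiguration with the conntrack timeouts deliberately left unset. A quick way to cross-check the cluster-side copy after the join, as the preflight output further down also points out, is to read the kubeadm-config ConfigMap (this sketch assumes the kubectl context created by this profile):

    kubectl --context multinode-174950 -n kube-system get cm kubeadm-config -o yaml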
	
	I0914 22:53:58.922491 2909621 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-174950-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-174950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 22:53:58.922559 2909621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 22:53:58.932197 2909621 command_runner.go:130] > kubeadm
	I0914 22:53:58.932219 2909621 command_runner.go:130] > kubectl
	I0914 22:53:58.932225 2909621 command_runner.go:130] > kubelet
	I0914 22:53:58.933336 2909621 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 22:53:58.933416 2909621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0914 22:53:58.944085 2909621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0914 22:53:58.966224 2909621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
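At this point the 430-byte 10-kubeadm.conf drop-in and the 352-byte kubelet.service unit have been written onto the worker; the service itself is only enabled and started after the kubeadm join below. A small sketch for inspecting them by hand, assuming minikube's ssh node-selector flag:

    minikube -p multinode-174950 ssh -n m02 -- sudo systemctl cat kubelet
    minikube -p multinode-174950 ssh -n m02 -- systemctl is-active kubelet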
	I0914 22:53:58.987800 2909621 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0914 22:53:58.993130 2909621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 22:53:59.006524 2909621 host.go:66] Checking if "multinode-174950" exists ...
	I0914 22:53:59.006788 2909621 start.go:304] JoinCluster: &{Name:multinode-174950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-174950 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:53:59.006875 2909621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0914 22:53:59.006926 2909621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-174950
	I0914 22:53:59.007299 2909621 config.go:182] Loaded profile config "multinode-174950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:53:59.026452 2909621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36463 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/multinode-174950/id_rsa Username:docker}
	I0914 22:53:59.203463 2909621 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 8qqyrx.ir78fsj1d6dr99de --discovery-token-ca-cert-hash sha256:a79084f9d3253dc9cd91fd80defbeb60d0999d7c0aaf6667eb02a65d009cd9dc 
	I0914 22:53:59.207435 2909621 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0914 22:53:59.207515 2909621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8qqyrx.ir78fsj1d6dr99de --discovery-token-ca-cert-hash sha256:a79084f9d3253dc9cd91fd80defbeb60d0999d7c0aaf6667eb02a65d009cd9dc --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-174950-m02"
	I0914 22:53:59.252344 2909621 command_runner.go:130] > [preflight] Running pre-flight checks
	I0914 22:53:59.293396 2909621 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0914 22:53:59.293459 2909621 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1044-aws
	I0914 22:53:59.293481 2909621 command_runner.go:130] > OS: Linux
	I0914 22:53:59.293511 2909621 command_runner.go:130] > CGROUPS_CPU: enabled
	I0914 22:53:59.293535 2909621 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0914 22:53:59.293552 2909621 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0914 22:53:59.293560 2909621 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0914 22:53:59.293566 2909621 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0914 22:53:59.293572 2909621 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0914 22:53:59.293580 2909621 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0914 22:53:59.293587 2909621 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0914 22:53:59.293596 2909621 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0914 22:53:59.406441 2909621 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0914 22:53:59.406816 2909621 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0914 22:53:59.439372 2909621 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 22:53:59.439652 2909621 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 22:53:59.439691 2909621 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0914 22:53:59.547354 2909621 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0914 22:54:03.065462 2909621 command_runner.go:130] > This node has joined the cluster:
	I0914 22:54:03.065485 2909621 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0914 22:54:03.065493 2909621 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0914 22:54:03.065502 2909621 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0914 22:54:03.068696 2909621 command_runner.go:130] ! W0914 22:53:59.251757    1020 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0914 22:54:03.068738 2909621 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-aws\n", err: exit status 1
	I0914 22:54:03.068753 2909621 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 22:54:03.068773 2909621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8qqyrx.ir78fsj1d6dr99de --discovery-token-ca-cert-hash sha256:a79084f9d3253dc9cd91fd80defbeb60d0999d7c0aaf6667eb02a65d009cd9dc --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-174950-m02": (3.861229701s)
	I0914 22:54:03.068806 2909621 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0914 22:54:03.187376 2909621 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0914 22:54:03.301201 2909621 start.go:306] JoinCluster complete in 4.29440562s
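The JoinCluster step above reduces to two commands that can be reproduced by hand: printing a join command on the control plane, then running it on the worker with the extra flags minikube adds (--ignore-preflight-errors=all, an explicit --cri-socket and --node-name). A sketch using the kubeadm binary staged under /var/lib/minikube/binaries/v1.28.1, with the token and CA hash left as placeholders:

    # on the control-plane node
    sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm token create --print-join-command --ttl=0
    # on the worker node, substituting the printed values
    sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-174950-m02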
	I0914 22:54:03.301226 2909621 cni.go:84] Creating CNI manager for ""
	I0914 22:54:03.301232 2909621 cni.go:136] 2 nodes found, recommending kindnet
	I0914 22:54:03.301296 2909621 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 22:54:03.306091 2909621 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0914 22:54:03.306154 2909621 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I0914 22:54:03.306176 2909621 command_runner.go:130] > Device: 3ah/58d	Inode: 2093924     Links: 1
	I0914 22:54:03.306190 2909621 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 22:54:03.306198 2909621 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I0914 22:54:03.306204 2909621 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I0914 22:54:03.306210 2909621 command_runner.go:130] > Change: 2023-09-14 22:27:05.126482900 +0000
	I0914 22:54:03.306217 2909621 command_runner.go:130] >  Birth: 2023-09-14 22:27:05.082482920 +0000
	I0914 22:54:03.306272 2909621 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0914 22:54:03.306296 2909621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0914 22:54:03.328066 2909621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 22:54:03.657940 2909621 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0914 22:54:03.662949 2909621 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0914 22:54:03.669200 2909621 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0914 22:54:03.699229 2909621 command_runner.go:130] > daemonset.apps/kindnet configured
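With two nodes detected, minikube re-applies the kindnet CNI manifest; the RBAC objects and service account come back unchanged and only the daemonset is reconfigured so a kindnet pod gets scheduled on the new worker. Watching that rollout by hand would look roughly like:

    kubectl --context multinode-174950 -n kube-system rollout status daemonset/kindnet
    kubectl --context multinode-174950 -n kube-system get pods -o wide | grep kindnet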
	I0914 22:54:03.704650 2909621 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 22:54:03.704924 2909621 kapi.go:59] client config for multinode-174950: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/client.key", CAFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 22:54:03.705293 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0914 22:54:03.705302 2909621 round_trippers.go:469] Request Headers:
	I0914 22:54:03.705310 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:54:03.705317 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:54:03.708683 2909621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:54:03.708765 2909621 round_trippers.go:577] Response Headers:
	I0914 22:54:03.708790 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:54:03 GMT
	I0914 22:54:03.708809 2909621 round_trippers.go:580]     Audit-Id: 2ff985b4-fdc3-4179-8d8e-81b87e53d1ad
	I0914 22:54:03.708831 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:54:03.708862 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:54:03.708884 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:54:03.708902 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:54:03.708923 2909621 round_trippers.go:580]     Content-Length: 291
	I0914 22:54:03.709144 2909621 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"207ba6c6-19ae-4b3e-a152-834bf8ae55eb","resourceVersion":"412","creationTimestamp":"2023-09-14T22:53:25Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0914 22:54:03.709248 2909621 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-174950" context rescaled to 1 replicas
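The GET against the coredns scale subresource shows spec.replicas already at 1, so the rescale that kapi.go reports is effectively a no-op; minikube keeps coredns at a single replica for this profile even as nodes are added. The hand-run equivalent would be roughly:

    kubectl --context multinode-174950 -n kube-system scale deployment coredns --replicas=1
    kubectl --context multinode-174950 -n kube-system get deploy coredns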
	I0914 22:54:03.709273 2909621 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0914 22:54:03.711822 2909621 out.go:177] * Verifying Kubernetes components...
	I0914 22:54:03.714007 2909621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:54:03.733876 2909621 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 22:54:03.734207 2909621 kapi.go:59] client config for multinode-174950: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/multinode-174950/client.key", CAFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 22:54:03.734554 2909621 node_ready.go:35] waiting up to 6m0s for node "multinode-174950-m02" to be "Ready" ...
	I0914 22:54:03.734644 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950-m02
	I0914 22:54:03.734676 2909621 round_trippers.go:469] Request Headers:
	I0914 22:54:03.734703 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:54:03.734726 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:54:03.737186 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:54:03.737234 2909621 round_trippers.go:577] Response Headers:
	I0914 22:54:03.737256 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:54:03.737280 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:54:03.737317 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:54:03.737341 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:54:03 GMT
	I0914 22:54:03.737361 2909621 round_trippers.go:580]     Audit-Id: e4079b79-168b-4651-8043-c00aea156421
	I0914 22:54:03.737387 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:54:03.737959 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950-m02","uid":"c43caac2-1bf2-43d6-8107-1f6f5e0162a6","resourceVersion":"458","creationTimestamp":"2023-09-14T22:54:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:54:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:54:0
2Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I0914 22:54:03.738436 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950-m02
	I0914 22:54:03.738469 2909621 round_trippers.go:469] Request Headers:
	I0914 22:54:03.738500 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:54:03.738521 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:54:03.740716 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:54:03.740762 2909621 round_trippers.go:577] Response Headers:
	I0914 22:54:03.740783 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:54:03.740803 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:54:03.740838 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:54:03.740862 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:54:03.740882 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:54:03 GMT
	I0914 22:54:03.740903 2909621 round_trippers.go:580]     Audit-Id: 8fcee520-fd61-4f84-ba33-519e3146b287
	I0914 22:54:03.741375 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950-m02","uid":"c43caac2-1bf2-43d6-8107-1f6f5e0162a6","resourceVersion":"458","creationTimestamp":"2023-09-14T22:54:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:54:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:54:0
2Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I0914 22:54:04.242495 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950-m02
	I0914 22:54:04.242514 2909621 round_trippers.go:469] Request Headers:
	I0914 22:54:04.242524 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:54:04.242531 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:54:04.244971 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:54:04.244997 2909621 round_trippers.go:577] Response Headers:
	I0914 22:54:04.245005 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:54:04.245011 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:54:04.245035 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:54:04.245042 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:54:04.245053 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:54:04 GMT
	I0914 22:54:04.245060 2909621 round_trippers.go:580]     Audit-Id: c8495041-7be6-48e0-a8a8-6ab2e91e318f
	I0914 22:54:04.245259 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950-m02","uid":"c43caac2-1bf2-43d6-8107-1f6f5e0162a6","resourceVersion":"458","creationTimestamp":"2023-09-14T22:54:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:54:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:54:0
2Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I0914 22:54:04.742755 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950-m02
	I0914 22:54:04.742780 2909621 round_trippers.go:469] Request Headers:
	I0914 22:54:04.742791 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:54:04.742798 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:54:04.745125 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:54:04.745148 2909621 round_trippers.go:577] Response Headers:
	I0914 22:54:04.745157 2909621 round_trippers.go:580]     Audit-Id: e75a7199-17dd-427f-a2a1-40f0c09deace
	I0914 22:54:04.745163 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:54:04.745169 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:54:04.745176 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:54:04.745188 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:54:04.745198 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:54:04 GMT
	I0914 22:54:04.745580 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950-m02","uid":"c43caac2-1bf2-43d6-8107-1f6f5e0162a6","resourceVersion":"458","creationTimestamp":"2023-09-14T22:54:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:54:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:54:0
2Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I0914 22:54:05.242701 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950-m02
	I0914 22:54:05.242730 2909621 round_trippers.go:469] Request Headers:
	I0914 22:54:05.242741 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:54:05.242748 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:54:05.245271 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:54:05.245292 2909621 round_trippers.go:577] Response Headers:
	I0914 22:54:05.245300 2909621 round_trippers.go:580]     Audit-Id: 1069ada5-4088-4e14-af40-30df06844db3
	I0914 22:54:05.245307 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:54:05.245313 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:54:05.245319 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:54:05.245325 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:54:05.245333 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:54:05 GMT
	I0914 22:54:05.245892 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950-m02","uid":"c43caac2-1bf2-43d6-8107-1f6f5e0162a6","resourceVersion":"458","creationTimestamp":"2023-09-14T22:54:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:54:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:54:0
2Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I0914 22:54:05.741924 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950-m02
	I0914 22:54:05.741947 2909621 round_trippers.go:469] Request Headers:
	I0914 22:54:05.741957 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:54:05.741964 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:54:05.744378 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:54:05.744398 2909621 round_trippers.go:577] Response Headers:
	I0914 22:54:05.744407 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:54:05.744413 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:54:05 GMT
	I0914 22:54:05.744419 2909621 round_trippers.go:580]     Audit-Id: 813e5e84-8d4b-4e5d-9eff-e73f4ba8f1e0
	I0914 22:54:05.744427 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:54:05.744433 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:54:05.744439 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:54:05.744593 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950-m02","uid":"c43caac2-1bf2-43d6-8107-1f6f5e0162a6","resourceVersion":"458","creationTimestamp":"2023-09-14T22:54:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:54:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:54:0
2Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I0914 22:54:05.744955 2909621 node_ready.go:58] node "multinode-174950-m02" has status "Ready":"False"
	I0914 22:54:06.241975 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950-m02
	I0914 22:54:06.241998 2909621 round_trippers.go:469] Request Headers:
	I0914 22:54:06.242009 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:54:06.242016 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:54:06.246348 2909621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 22:54:06.246369 2909621 round_trippers.go:577] Response Headers:
	I0914 22:54:06.246377 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:54:06 GMT
	I0914 22:54:06.246384 2909621 round_trippers.go:580]     Audit-Id: 2ad4816b-9b0a-4b9c-9560-e7b98d6cb3bc
	I0914 22:54:06.246390 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:54:06.246397 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:54:06.246404 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:54:06.246410 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:54:06.246606 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950-m02","uid":"c43caac2-1bf2-43d6-8107-1f6f5e0162a6","resourceVersion":"458","creationTimestamp":"2023-09-14T22:54:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:54:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:54:0
2Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I0914 22:54:06.742721 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950-m02
	I0914 22:54:06.742741 2909621 round_trippers.go:469] Request Headers:
	I0914 22:54:06.742752 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:54:06.742759 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:54:06.745197 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:54:06.745220 2909621 round_trippers.go:577] Response Headers:
	I0914 22:54:06.745228 2909621 round_trippers.go:580]     Audit-Id: 092ffd0a-4ddb-4546-80ed-5fd171b54b32
	I0914 22:54:06.745234 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:54:06.745242 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:54:06.745249 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:54:06.745255 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:54:06.745262 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:54:06 GMT
	I0914 22:54:06.745364 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950-m02","uid":"c43caac2-1bf2-43d6-8107-1f6f5e0162a6","resourceVersion":"477","creationTimestamp":"2023-09-14T22:54:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:54:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5378 chars]
	I0914 22:54:06.745736 2909621 node_ready.go:49] node "multinode-174950-m02" has status "Ready":"True"
	I0914 22:54:06.745753 2909621 node_ready.go:38] duration metric: took 3.011162426s waiting for node "multinode-174950-m02" to be "Ready" ...
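The readiness poll above is a plain loop of GETs against /api/v1/nodes/multinode-174950-m02 roughly every 500ms until the Ready condition reports True, which takes about three seconds after the join. The same check from a shell would be along the lines of:

    kubectl --context multinode-174950 wait --for=condition=Ready node/multinode-174950-m02 --timeout=6m
    kubectl --context multinode-174950 get nodes -o wide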
	I0914 22:54:06.745762 2909621 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:54:06.745823 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0914 22:54:06.745834 2909621 round_trippers.go:469] Request Headers:
	I0914 22:54:06.745842 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:54:06.745848 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:54:06.749266 2909621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:54:06.749292 2909621 round_trippers.go:577] Response Headers:
	I0914 22:54:06.749301 2909621 round_trippers.go:580]     Audit-Id: 811664d5-215f-4925-8c23-e85f9213317c
	I0914 22:54:06.749307 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:54:06.749313 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:54:06.749320 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:54:06.749330 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:54:06.749337 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:54:06 GMT
	I0914 22:54:06.749909 2909621 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"480"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2xp7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"76ecbab3-e96d-4c2e-be1e-21bed9f04965","resourceVersion":"408","creationTimestamp":"2023-09-14T22:53:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"012cf8a3-f2fd-4aae-a00d-05f7d523e904","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:53:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012cf8a3-f2fd-4aae-a00d-05f7d523e904\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I0914 22:54:06.752842 2909621 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2xp7v" in "kube-system" namespace to be "Ready" ...
	I0914 22:54:06.752923 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2xp7v
	I0914 22:54:06.752935 2909621 round_trippers.go:469] Request Headers:
	I0914 22:54:06.752944 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:54:06.752951 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:54:06.755097 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:54:06.755113 2909621 round_trippers.go:577] Response Headers:
	I0914 22:54:06.755121 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:54:06.755127 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:54:06 GMT
	I0914 22:54:06.755133 2909621 round_trippers.go:580]     Audit-Id: 5b1cde4e-9fcb-4d37-a2d8-6591bb236342
	I0914 22:54:06.755140 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:54:06.755146 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:54:06.755153 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:54:06.755265 2909621 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2xp7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"76ecbab3-e96d-4c2e-be1e-21bed9f04965","resourceVersion":"408","creationTimestamp":"2023-09-14T22:53:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"012cf8a3-f2fd-4aae-a00d-05f7d523e904","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:53:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"012cf8a3-f2fd-4aae-a00d-05f7d523e904\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0914 22:54:06.755758 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:54:06.755769 2909621 round_trippers.go:469] Request Headers:
	I0914 22:54:06.755777 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:54:06.755784 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:54:06.757748 2909621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:54:06.757762 2909621 round_trippers.go:577] Response Headers:
	I0914 22:54:06.757770 2909621 round_trippers.go:580]     Audit-Id: 1bc88572-2cc7-4812-8e26-e75e34f82425
	I0914 22:54:06.757776 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:54:06.757782 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:54:06.757788 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:54:06.757794 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:54:06.757800 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:54:06 GMT
	I0914 22:54:06.757922 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"429","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6326 chars]
	I0914 22:54:06.758321 2909621 pod_ready.go:92] pod "coredns-5dd5756b68-2xp7v" in "kube-system" namespace has status "Ready":"True"
	I0914 22:54:06.758333 2909621 pod_ready.go:81] duration metric: took 5.467631ms waiting for pod "coredns-5dd5756b68-2xp7v" in "kube-system" namespace to be "Ready" ...
	I0914 22:54:06.758344 2909621 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-174950" in "kube-system" namespace to be "Ready" ...
	I0914 22:54:06.758394 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-174950
	I0914 22:54:06.758399 2909621 round_trippers.go:469] Request Headers:
	I0914 22:54:06.758406 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:54:06.758412 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:54:06.760308 2909621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:54:06.760362 2909621 round_trippers.go:577] Response Headers:
	I0914 22:54:06.760383 2909621 round_trippers.go:580]     Audit-Id: 68a9efed-bcae-44df-9d8b-fd0a1bd4f099
	I0914 22:54:06.760390 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:54:06.760397 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:54:06.760404 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:54:06.760410 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:54:06.760439 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:54:06 GMT
	I0914 22:54:06.760826 2909621 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-174950","namespace":"kube-system","uid":"a51d6460-f0b3-4961-8e4d-323c3036cbc0","resourceVersion":"416","creationTimestamp":"2023-09-14T22:53:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"9e1ae37e211ff16d31b1032ed5657d55","kubernetes.io/config.mirror":"9e1ae37e211ff16d31b1032ed5657d55","kubernetes.io/config.seen":"2023-09-14T22:53:26.037763657Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0914 22:54:06.761269 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:54:06.761285 2909621 round_trippers.go:469] Request Headers:
	I0914 22:54:06.761293 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:54:06.761300 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:54:06.763285 2909621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 22:54:06.763344 2909621 round_trippers.go:577] Response Headers:
	I0914 22:54:06.763366 2909621 round_trippers.go:580]     Audit-Id: b79118c2-b18d-4781-bfe3-37601bb92452
	I0914 22:54:06.763387 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:54:06.763424 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:54:06.763438 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:54:06.763444 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:54:06.763451 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:54:06 GMT
	I0914 22:54:06.763585 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"429","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6326 chars]
	I0914 22:54:06.763967 2909621 pod_ready.go:92] pod "etcd-multinode-174950" in "kube-system" namespace has status "Ready":"True"
	I0914 22:54:06.763985 2909621 pod_ready.go:81] duration metric: took 5.634688ms waiting for pod "etcd-multinode-174950" in "kube-system" namespace to be "Ready" ...
	I0914 22:54:06.764000 2909621 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-174950" in "kube-system" namespace to be "Ready" ...
	I0914 22:54:06.764050 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-174950
	I0914 22:54:06.764069 2909621 round_trippers.go:469] Request Headers:
	I0914 22:54:06.764077 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:54:06.764087 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:54:06.766766 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:54:06.766800 2909621 round_trippers.go:577] Response Headers:
	I0914 22:54:06.766809 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:54:06.766818 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:54:06 GMT
	I0914 22:54:06.766824 2909621 round_trippers.go:580]     Audit-Id: 9f5c960e-aebd-4e3b-a050-03d072aac504
	I0914 22:54:06.766834 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:54:06.766840 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:54:06.766858 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:54:06.767021 2909621 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-174950","namespace":"kube-system","uid":"ac1ba3ae-0fb3-4999-b147-5ff333a2f947","resourceVersion":"417","creationTimestamp":"2023-09-14T22:53:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b753d9f03819cd7363b6eb842fa0c58c","kubernetes.io/config.mirror":"b753d9f03819cd7363b6eb842fa0c58c","kubernetes.io/config.seen":"2023-09-14T22:53:26.037768859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0914 22:54:06.767565 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:54:06.767580 2909621 round_trippers.go:469] Request Headers:
	I0914 22:54:06.767588 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:54:06.767595 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:54:06.769763 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:54:06.769790 2909621 round_trippers.go:577] Response Headers:
	I0914 22:54:06.769798 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:54:06.769804 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:54:06.769811 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:54:06 GMT
	I0914 22:54:06.769824 2909621 round_trippers.go:580]     Audit-Id: e2510bb3-7d36-4f66-896b-a9185bd4286e
	I0914 22:54:06.769836 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:54:06.769842 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:54:06.770083 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"429","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6326 chars]
	I0914 22:54:06.770459 2909621 pod_ready.go:92] pod "kube-apiserver-multinode-174950" in "kube-system" namespace has status "Ready":"True"
	I0914 22:54:06.770469 2909621 pod_ready.go:81] duration metric: took 6.462478ms waiting for pod "kube-apiserver-multinode-174950" in "kube-system" namespace to be "Ready" ...
	I0914 22:54:06.770490 2909621 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-174950" in "kube-system" namespace to be "Ready" ...
	I0914 22:54:06.770545 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-174950
	I0914 22:54:06.770549 2909621 round_trippers.go:469] Request Headers:
	I0914 22:54:06.770558 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:54:06.770566 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:54:06.772610 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:54:06.772631 2909621 round_trippers.go:577] Response Headers:
	I0914 22:54:06.772639 2909621 round_trippers.go:580]     Audit-Id: c9dbd819-268f-463c-8cd1-e2f06847bd3e
	I0914 22:54:06.772646 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:54:06.772652 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:54:06.772658 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:54:06.772667 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:54:06.772685 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:54:06 GMT
	I0914 22:54:06.772992 2909621 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-174950","namespace":"kube-system","uid":"50b26397-695e-4c44-a4dd-a7bc43801d89","resourceVersion":"418","creationTimestamp":"2023-09-14T22:53:25Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5438d3937ab25683562f3af80faa8102","kubernetes.io/config.mirror":"5438d3937ab25683562f3af80faa8102","kubernetes.io/config.seen":"2023-09-14T22:53:17.725687635Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:53:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0914 22:54:06.773491 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:54:06.773505 2909621 round_trippers.go:469] Request Headers:
	I0914 22:54:06.773514 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:54:06.773521 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:54:06.775655 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:54:06.775676 2909621 round_trippers.go:577] Response Headers:
	I0914 22:54:06.775685 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:54:06.775691 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:54:06.775697 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:54:06 GMT
	I0914 22:54:06.775706 2909621 round_trippers.go:580]     Audit-Id: a187cdb0-083d-44a2-ac3e-083e4da9fc3b
	I0914 22:54:06.775717 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:54:06.775724 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:54:06.775929 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"429","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6326 chars]
	I0914 22:54:06.776296 2909621 pod_ready.go:92] pod "kube-controller-manager-multinode-174950" in "kube-system" namespace has status "Ready":"True"
	I0914 22:54:06.776312 2909621 pod_ready.go:81] duration metric: took 5.812764ms waiting for pod "kube-controller-manager-multinode-174950" in "kube-system" namespace to be "Ready" ...
	I0914 22:54:06.776322 2909621 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bnzw9" in "kube-system" namespace to be "Ready" ...
	I0914 22:54:06.943687 2909621 request.go:629] Waited for 167.300344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bnzw9
	I0914 22:54:06.943753 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bnzw9
	I0914 22:54:06.943763 2909621 round_trippers.go:469] Request Headers:
	I0914 22:54:06.943772 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:54:06.943780 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:54:06.947277 2909621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 22:54:06.947296 2909621 round_trippers.go:577] Response Headers:
	I0914 22:54:06.947304 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:54:06.947311 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:54:06.947317 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:54:06.947324 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:54:06 GMT
	I0914 22:54:06.947331 2909621 round_trippers.go:580]     Audit-Id: 1671c992-2482-42fe-9304-91a14eeacb39
	I0914 22:54:06.947337 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:54:06.947499 2909621 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bnzw9","generateName":"kube-proxy-","namespace":"kube-system","uid":"f1603251-4752-4512-af9b-ff3613c8d086","resourceVersion":"469","creationTimestamp":"2023-09-14T22:54:02Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e8a2f152-91bb-4cf3-bcec-3cf0c6c4708c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:54:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8a2f152-91bb-4cf3-bcec-3cf0c6c4708c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0914 22:54:07.143312 2909621 request.go:629] Waited for 195.313941ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-174950-m02
	I0914 22:54:07.143390 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950-m02
	I0914 22:54:07.143416 2909621 round_trippers.go:469] Request Headers:
	I0914 22:54:07.143430 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:54:07.143438 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:54:07.145915 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:54:07.145974 2909621 round_trippers.go:577] Response Headers:
	I0914 22:54:07.145997 2909621 round_trippers.go:580]     Audit-Id: ae34fc63-20a7-4bbb-bbf5-b42dca88d037
	I0914 22:54:07.146020 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:54:07.146052 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:54:07.146078 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:54:07.146099 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:54:07.146120 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:54:07 GMT
	I0914 22:54:07.146260 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950-m02","uid":"c43caac2-1bf2-43d6-8107-1f6f5e0162a6","resourceVersion":"477","creationTimestamp":"2023-09-14T22:54:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:54:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5378 chars]
	I0914 22:54:07.146676 2909621 pod_ready.go:92] pod "kube-proxy-bnzw9" in "kube-system" namespace has status "Ready":"True"
	I0914 22:54:07.146694 2909621 pod_ready.go:81] duration metric: took 370.355925ms waiting for pod "kube-proxy-bnzw9" in "kube-system" namespace to be "Ready" ...
	I0914 22:54:07.146708 2909621 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hfqpz" in "kube-system" namespace to be "Ready" ...
	I0914 22:54:07.343123 2909621 request.go:629] Waited for 196.347449ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hfqpz
	I0914 22:54:07.343276 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hfqpz
	I0914 22:54:07.343288 2909621 round_trippers.go:469] Request Headers:
	I0914 22:54:07.343297 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:54:07.343307 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:54:07.347540 2909621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 22:54:07.347572 2909621 round_trippers.go:577] Response Headers:
	I0914 22:54:07.347580 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:54:07.347589 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:54:07.347596 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:54:07.347602 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:54:07 GMT
	I0914 22:54:07.347608 2909621 round_trippers.go:580]     Audit-Id: a356c545-a0f8-4f26-a4ec-4aa3475a6561
	I0914 22:54:07.347619 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:54:07.347731 2909621 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hfqpz","generateName":"kube-proxy-","namespace":"kube-system","uid":"44f7ab98-fab3-4a63-ad84-ccb6e71f9c3b","resourceVersion":"379","creationTimestamp":"2023-09-14T22:53:38Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e8a2f152-91bb-4cf3-bcec-3cf0c6c4708c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:53:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8a2f152-91bb-4cf3-bcec-3cf0c6c4708c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0914 22:54:07.543548 2909621 request.go:629] Waited for 195.318355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:54:07.543620 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:54:07.543626 2909621 round_trippers.go:469] Request Headers:
	I0914 22:54:07.543641 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:54:07.543651 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:54:07.546085 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:54:07.546110 2909621 round_trippers.go:577] Response Headers:
	I0914 22:54:07.546119 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:54:07.546126 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:54:07.546132 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:54:07.546140 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:54:07.546158 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:54:07 GMT
	I0914 22:54:07.546165 2909621 round_trippers.go:580]     Audit-Id: c246ffbc-490b-4d8c-a9ee-6e609d467915
	I0914 22:54:07.546288 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"429","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6326 chars]
	I0914 22:54:07.546725 2909621 pod_ready.go:92] pod "kube-proxy-hfqpz" in "kube-system" namespace has status "Ready":"True"
	I0914 22:54:07.546744 2909621 pod_ready.go:81] duration metric: took 400.023708ms waiting for pod "kube-proxy-hfqpz" in "kube-system" namespace to be "Ready" ...
	I0914 22:54:07.546756 2909621 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-174950" in "kube-system" namespace to be "Ready" ...
	I0914 22:54:07.743072 2909621 request.go:629] Waited for 196.250818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-174950
	I0914 22:54:07.743161 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-174950
	I0914 22:54:07.743172 2909621 round_trippers.go:469] Request Headers:
	I0914 22:54:07.743181 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:54:07.743189 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:54:07.745600 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:54:07.745621 2909621 round_trippers.go:577] Response Headers:
	I0914 22:54:07.745629 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:54:07.745636 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:54:07.745664 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:54:07 GMT
	I0914 22:54:07.745679 2909621 round_trippers.go:580]     Audit-Id: 6f63b150-65c5-4246-bc0e-6ba6461b14a9
	I0914 22:54:07.745686 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:54:07.745691 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:54:07.745816 2909621 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-174950","namespace":"kube-system","uid":"48c4d4fe-c814-4ab5-b17b-569f9c6bad4e","resourceVersion":"415","creationTimestamp":"2023-09-14T22:53:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"51fd9c2dfaf5c9ce7ec648e63e4635cd","kubernetes.io/config.mirror":"51fd9c2dfaf5c9ce7ec648e63e4635cd","kubernetes.io/config.seen":"2023-09-14T22:53:26.037771296Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T22:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0914 22:54:07.943552 2909621 request.go:629] Waited for 197.328873ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:54:07.943607 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-174950
	I0914 22:54:07.943612 2909621 round_trippers.go:469] Request Headers:
	I0914 22:54:07.943621 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:54:07.943628 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:54:07.946005 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:54:07.946026 2909621 round_trippers.go:577] Response Headers:
	I0914 22:54:07.946034 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:54:07.946041 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:54:07.946047 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:54:07 GMT
	I0914 22:54:07.946053 2909621 round_trippers.go:580]     Audit-Id: d5f767d4-db55-463d-ab58-28ede26e316c
	I0914 22:54:07.946059 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:54:07.946065 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:54:07.946173 2909621 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"429","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-14T22:53:22Z","fieldsType":"FieldsV1","fiel [truncated 6326 chars]
	I0914 22:54:07.946590 2909621 pod_ready.go:92] pod "kube-scheduler-multinode-174950" in "kube-system" namespace has status "Ready":"True"
	I0914 22:54:07.946602 2909621 pod_ready.go:81] duration metric: took 399.839207ms waiting for pod "kube-scheduler-multinode-174950" in "kube-system" namespace to be "Ready" ...
	I0914 22:54:07.946614 2909621 pod_ready.go:38] duration metric: took 1.200838425s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 22:54:07.946627 2909621 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 22:54:07.946682 2909621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:54:07.959813 2909621 system_svc.go:56] duration metric: took 13.175721ms WaitForService to wait for kubelet.
	I0914 22:54:07.959837 2909621 kubeadm.go:581] duration metric: took 4.250541523s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 22:54:07.959856 2909621 node_conditions.go:102] verifying NodePressure condition ...
	I0914 22:54:08.143247 2909621 request.go:629] Waited for 183.31741ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0914 22:54:08.143321 2909621 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0914 22:54:08.143326 2909621 round_trippers.go:469] Request Headers:
	I0914 22:54:08.143335 2909621 round_trippers.go:473]     Accept: application/json, */*
	I0914 22:54:08.143342 2909621 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 22:54:08.145859 2909621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 22:54:08.145884 2909621 round_trippers.go:577] Response Headers:
	I0914 22:54:08.145893 2909621 round_trippers.go:580]     Date: Thu, 14 Sep 2023 22:54:08 GMT
	I0914 22:54:08.145899 2909621 round_trippers.go:580]     Audit-Id: aa4eff68-6797-4324-8dfe-e35f00cf7a8d
	I0914 22:54:08.145928 2909621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 22:54:08.145944 2909621 round_trippers.go:580]     Content-Type: application/json
	I0914 22:54:08.145951 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8fb42a01-43cb-433e-8d27-928a15bf6182
	I0914 22:54:08.145961 2909621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 429f77d7-8740-4dbd-96b2-82135cf2c467
	I0914 22:54:08.146127 2909621 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"482"},"items":[{"metadata":{"name":"multinode-174950","uid":"561722f4-f6c5-4d8e-ade2-36bfe7c77a35","resourceVersion":"429","creationTimestamp":"2023-09-14T22:53:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-174950","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7492f2ae2d9b6e62b385ffcd97ebad62c645e82","minikube.k8s.io/name":"multinode-174950","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T22_53_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12629 chars]
	I0914 22:54:08.146805 2909621 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 22:54:08.146828 2909621 node_conditions.go:123] node cpu capacity is 2
	I0914 22:54:08.146839 2909621 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 22:54:08.146844 2909621 node_conditions.go:123] node cpu capacity is 2
	I0914 22:54:08.146849 2909621 node_conditions.go:105] duration metric: took 186.988366ms to run NodePressure ...
	I0914 22:54:08.146876 2909621 start.go:228] waiting for startup goroutines ...
	I0914 22:54:08.146907 2909621 start.go:242] writing updated cluster config ...
	I0914 22:54:08.147227 2909621 ssh_runner.go:195] Run: rm -f paused
	I0914 22:54:08.204249 2909621 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 22:54:08.206975 2909621 out.go:177] * Done! kubectl is now configured to use "multinode-174950" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Sep 14 22:53:42 multinode-174950 crio[901]: time="2023-09-14 22:53:42.898338834Z" level=info msg="Starting container: 0cb8ea3616b25ecd51c4e26d0c61042a3c4507de81d96a41f1d30414a3c89074" id=b7f027c5-61a2-47a4-a314-6b71606e53b3 name=/runtime.v1.RuntimeService/StartContainer
	Sep 14 22:53:42 multinode-174950 crio[901]: time="2023-09-14 22:53:42.911435566Z" level=info msg="Started container" PID=1949 containerID=0cb8ea3616b25ecd51c4e26d0c61042a3c4507de81d96a41f1d30414a3c89074 description=kube-system/storage-provisioner/storage-provisioner id=b7f027c5-61a2-47a4-a314-6b71606e53b3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b169f3c37b6f638acece2d51bfc2da6fd5bc053cceed6a7ee19230aa36f5a639
	Sep 14 22:53:42 multinode-174950 crio[901]: time="2023-09-14 22:53:42.916554069Z" level=info msg="Created container dad48df507b46f2f4b53e25e96334def487872a4fdd4c7093edc683a6f6be657: kube-system/coredns-5dd5756b68-2xp7v/coredns" id=5ae7c835-20d5-44fa-9a30-94e766342e87 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 14 22:53:42 multinode-174950 crio[901]: time="2023-09-14 22:53:42.917177043Z" level=info msg="Starting container: dad48df507b46f2f4b53e25e96334def487872a4fdd4c7093edc683a6f6be657" id=8595db73-262a-4afc-bae9-f1236468ee81 name=/runtime.v1.RuntimeService/StartContainer
	Sep 14 22:53:42 multinode-174950 crio[901]: time="2023-09-14 22:53:42.930700350Z" level=info msg="Started container" PID=1968 containerID=dad48df507b46f2f4b53e25e96334def487872a4fdd4c7093edc683a6f6be657 description=kube-system/coredns-5dd5756b68-2xp7v/coredns id=8595db73-262a-4afc-bae9-f1236468ee81 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3ef2842afaeb95f015cd59e4bfc3d2fd6192e55841a62fd394f96980f0ae0ac0
	Sep 14 22:54:09 multinode-174950 crio[901]: time="2023-09-14 22:54:09.395700021Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-grlb8/POD" id=4bd00398-bf24-4dfd-a5d1-06ba2cdef3be name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 14 22:54:09 multinode-174950 crio[901]: time="2023-09-14 22:54:09.395765457Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 14 22:54:09 multinode-174950 crio[901]: time="2023-09-14 22:54:09.412006285Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-grlb8 Namespace:default ID:f28021dcf4aeb204bd95c62daafe706b7fa5cf34b7e9d68913b45f8eed0ae914 UID:873459a7-4c86-4e2f-81ef-9adc7de708ec NetNS:/var/run/netns/ec19d06e-f962-481b-9775-4b2cee0ebed9 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 14 22:54:09 multinode-174950 crio[901]: time="2023-09-14 22:54:09.412042954Z" level=info msg="Adding pod default_busybox-5bc68d56bd-grlb8 to CNI network \"kindnet\" (type=ptp)"
	Sep 14 22:54:09 multinode-174950 crio[901]: time="2023-09-14 22:54:09.431516830Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-grlb8 Namespace:default ID:f28021dcf4aeb204bd95c62daafe706b7fa5cf34b7e9d68913b45f8eed0ae914 UID:873459a7-4c86-4e2f-81ef-9adc7de708ec NetNS:/var/run/netns/ec19d06e-f962-481b-9775-4b2cee0ebed9 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 14 22:54:09 multinode-174950 crio[901]: time="2023-09-14 22:54:09.431658975Z" level=info msg="Checking pod default_busybox-5bc68d56bd-grlb8 for CNI network kindnet (type=ptp)"
	Sep 14 22:54:09 multinode-174950 crio[901]: time="2023-09-14 22:54:09.436026735Z" level=info msg="Ran pod sandbox f28021dcf4aeb204bd95c62daafe706b7fa5cf34b7e9d68913b45f8eed0ae914 with infra container: default/busybox-5bc68d56bd-grlb8/POD" id=4bd00398-bf24-4dfd-a5d1-06ba2cdef3be name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 14 22:54:09 multinode-174950 crio[901]: time="2023-09-14 22:54:09.437022632Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=47aaf2e1-dad1-4c11-b851-9bf61e24efa8 name=/runtime.v1.ImageService/ImageStatus
	Sep 14 22:54:09 multinode-174950 crio[901]: time="2023-09-14 22:54:09.437229294Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=47aaf2e1-dad1-4c11-b851-9bf61e24efa8 name=/runtime.v1.ImageService/ImageStatus
	Sep 14 22:54:09 multinode-174950 crio[901]: time="2023-09-14 22:54:09.437836194Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=9fef3888-062f-4bf7-b5e7-2f61d73d8aa3 name=/runtime.v1.ImageService/PullImage
	Sep 14 22:54:09 multinode-174950 crio[901]: time="2023-09-14 22:54:09.439011472Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 14 22:54:10 multinode-174950 crio[901]: time="2023-09-14 22:54:10.124209861Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 14 22:54:11 multinode-174950 crio[901]: time="2023-09-14 22:54:11.365719253Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=9fef3888-062f-4bf7-b5e7-2f61d73d8aa3 name=/runtime.v1.ImageService/PullImage
	Sep 14 22:54:11 multinode-174950 crio[901]: time="2023-09-14 22:54:11.366982128Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=d8211a72-dd8d-4f72-87bc-307342cdf656 name=/runtime.v1.ImageService/ImageStatus
	Sep 14 22:54:11 multinode-174950 crio[901]: time="2023-09-14 22:54:11.367625574Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d8211a72-dd8d-4f72-87bc-307342cdf656 name=/runtime.v1.ImageService/ImageStatus
	Sep 14 22:54:11 multinode-174950 crio[901]: time="2023-09-14 22:54:11.368400696Z" level=info msg="Creating container: default/busybox-5bc68d56bd-grlb8/busybox" id=f84b272d-6b9f-47c7-b06b-e1ad21935f21 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 14 22:54:11 multinode-174950 crio[901]: time="2023-09-14 22:54:11.368490222Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 14 22:54:11 multinode-174950 crio[901]: time="2023-09-14 22:54:11.478045680Z" level=info msg="Created container fa4df50a3464cdd5a9707aa8604dd101bb0164a8b40c4e8ae9140118fc047b1b: default/busybox-5bc68d56bd-grlb8/busybox" id=f84b272d-6b9f-47c7-b06b-e1ad21935f21 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 14 22:54:11 multinode-174950 crio[901]: time="2023-09-14 22:54:11.479367550Z" level=info msg="Starting container: fa4df50a3464cdd5a9707aa8604dd101bb0164a8b40c4e8ae9140118fc047b1b" id=8f27691f-7530-4115-8f18-03bc8044e672 name=/runtime.v1.RuntimeService/StartContainer
	Sep 14 22:54:11 multinode-174950 crio[901]: time="2023-09-14 22:54:11.489382715Z" level=info msg="Started container" PID=2107 containerID=fa4df50a3464cdd5a9707aa8604dd101bb0164a8b40c4e8ae9140118fc047b1b description=default/busybox-5bc68d56bd-grlb8/busybox id=8f27691f-7530-4115-8f18-03bc8044e672 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f28021dcf4aeb204bd95c62daafe706b7fa5cf34b7e9d68913b45f8eed0ae914
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fa4df50a3464c       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago       Running             busybox                   0                   f28021dcf4aeb       busybox-5bc68d56bd-grlb8
	dad48df507b46       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      34 seconds ago      Running             coredns                   0                   3ef2842afaeb9       coredns-5dd5756b68-2xp7v
	0cb8ea3616b25       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      34 seconds ago      Running             storage-provisioner       0                   b169f3c37b6f6       storage-provisioner
	384782373e5af       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052    35 seconds ago      Running             kindnet-cni               0                   09a7bd2aaabab       kindnet-x8mln
	8f968b3a535a1       812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26                                      37 seconds ago      Running             kube-proxy                0                   751a3cd10e382       kube-proxy-hfqpz
	c1d6009c9713e       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      58 seconds ago      Running             etcd                      0                   8dca2ee7a7cc2       etcd-multinode-174950
	d51b38cf29736       b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87                                      58 seconds ago      Running             kube-scheduler            0                   c5525778706f7       kube-scheduler-multinode-174950
	4efd448e37f75       b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a                                      58 seconds ago      Running             kube-apiserver            0                   47a0876bec3a3       kube-apiserver-multinode-174950
	95f66a6e8a895       8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965                                      58 seconds ago      Running             kube-controller-manager   0                   0b423ed88c1ec       kube-controller-manager-multinode-174950
	
	* 
	* ==> coredns [dad48df507b46f2f4b53e25e96334def487872a4fdd4c7093edc683a6f6be657] <==
	* [INFO] 10.244.1.2:58147 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111368s
	[INFO] 10.244.0.3:54326 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112369s
	[INFO] 10.244.0.3:52299 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001189858s
	[INFO] 10.244.0.3:43581 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000070015s
	[INFO] 10.244.0.3:45995 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000044078s
	[INFO] 10.244.0.3:45587 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000868766s
	[INFO] 10.244.0.3:54368 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000060315s
	[INFO] 10.244.0.3:46604 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000045448s
	[INFO] 10.244.0.3:43723 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000050937s
	[INFO] 10.244.1.2:53341 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092529s
	[INFO] 10.244.1.2:60155 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000071574s
	[INFO] 10.244.1.2:36345 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068702s
	[INFO] 10.244.1.2:47339 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068422s
	[INFO] 10.244.0.3:54274 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000095253s
	[INFO] 10.244.0.3:39293 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062276s
	[INFO] 10.244.0.3:49925 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070564s
	[INFO] 10.244.0.3:47823 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055738s
	[INFO] 10.244.1.2:44939 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132956s
	[INFO] 10.244.1.2:60988 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152583s
	[INFO] 10.244.1.2:35304 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104763s
	[INFO] 10.244.1.2:34209 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001349s
	[INFO] 10.244.0.3:36695 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145526s
	[INFO] 10.244.0.3:49548 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000057509s
	[INFO] 10.244.0.3:46247 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078138s
	[INFO] 10.244.0.3:59811 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000067536s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-174950
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-174950
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=multinode-174950
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T22_53_27_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 22:53:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-174950
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 22:54:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 22:53:57 +0000   Thu, 14 Sep 2023 22:53:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 22:53:57 +0000   Thu, 14 Sep 2023 22:53:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 22:53:57 +0000   Thu, 14 Sep 2023 22:53:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 22:53:57 +0000   Thu, 14 Sep 2023 22:53:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-174950
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 6bc70dd0d2a144c7884db0f8aba08664
	  System UUID:                d4300e59-5e0e-42ec-b70b-5ead2b6742d5
	  Boot ID:                    370886c1-a939-4b15-8117-498126d3502e
	  Kernel Version:             5.15.0-1044-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-grlb8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-2xp7v                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     39s
	  kube-system                 etcd-multinode-174950                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         51s
	  kube-system                 kindnet-x8mln                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      39s
	  kube-system                 kube-apiserver-multinode-174950             250m (12%)    0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kube-controller-manager-multinode-174950    200m (10%)    0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-proxy-hfqpz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-scheduler-multinode-174950             100m (5%)     0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 37s   kube-proxy       
	  Normal  Starting                 51s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  51s   kubelet          Node multinode-174950 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s   kubelet          Node multinode-174950 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s   kubelet          Node multinode-174950 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s   node-controller  Node multinode-174950 event: Registered Node multinode-174950 in Controller
	  Normal  NodeReady                35s   kubelet          Node multinode-174950 status is now: NodeReady
	
	
	Name:               multinode-174950-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-174950-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 22:54:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-174950-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 22:54:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 22:54:06 +0000   Thu, 14 Sep 2023 22:54:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 22:54:06 +0000   Thu, 14 Sep 2023 22:54:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 22:54:06 +0000   Thu, 14 Sep 2023 22:54:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 22:54:06 +0000   Thu, 14 Sep 2023 22:54:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-174950-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 1fa6e6b8a3704ec69902df02baae2fb4
	  System UUID:                6662b945-9bc8-4d17-9416-fba5d1c9b5e7
	  Boot ID:                    370886c1-a939-4b15-8117-498126d3502e
	  Kernel Version:             5.15.0-1044-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-fkf4t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-44lgb               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15s
	  kube-system                 kube-proxy-bnzw9            0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  RegisteredNode           15s                node-controller  Node multinode-174950-m02 event: Registered Node multinode-174950-m02 in Controller
	  Normal  NodeHasSufficientMemory  15s (x5 over 16s)  kubelet          Node multinode-174950-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15s (x5 over 16s)  kubelet          Node multinode-174950-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15s (x5 over 16s)  kubelet          Node multinode-174950-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                11s                kubelet          Node multinode-174950-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001074] FS-Cache: O-key=[8] '85703b0000000000'
	[  +0.000714] FS-Cache: N-cookie c=000000e5 [p=000000db fl=2 nc=0 na=1]
	[  +0.000899] FS-Cache: N-cookie d=00000000358e642b{9p.inode} n=0000000040a297ab
	[  +0.001017] FS-Cache: N-key=[8] '85703b0000000000'
	[  +2.012590] FS-Cache: Duplicate cookie detected
	[  +0.000690] FS-Cache: O-cookie c=000000dc [p=000000db fl=226 nc=0 na=1]
	[  +0.000956] FS-Cache: O-cookie d=00000000358e642b{9p.inode} n=0000000000e476c3
	[  +0.001056] FS-Cache: O-key=[8] '84703b0000000000'
	[  +0.000740] FS-Cache: N-cookie c=000000e7 [p=000000db fl=2 nc=0 na=1]
	[  +0.001048] FS-Cache: N-cookie d=00000000358e642b{9p.inode} n=00000000e4905bc3
	[  +0.001024] FS-Cache: N-key=[8] '84703b0000000000'
	[  +0.406786] FS-Cache: Duplicate cookie detected
	[  +0.000688] FS-Cache: O-cookie c=000000e1 [p=000000db fl=226 nc=0 na=1]
	[  +0.000951] FS-Cache: O-cookie d=00000000358e642b{9p.inode} n=000000007a274cdd
	[  +0.001021] FS-Cache: O-key=[8] '8a703b0000000000'
	[  +0.000726] FS-Cache: N-cookie c=000000e8 [p=000000db fl=2 nc=0 na=1]
	[  +0.000918] FS-Cache: N-cookie d=00000000358e642b{9p.inode} n=0000000038968ff8
	[  +0.001006] FS-Cache: N-key=[8] '8a703b0000000000'
	[  +4.128718] FS-Cache: Duplicate cookie detected
	[  +0.000680] FS-Cache: O-cookie c=000000ea [p=00000002 fl=222 nc=0 na=1]
	[  +0.001015] FS-Cache: O-cookie d=00000000fe6607cc{9P.session} n=000000001f02128f
	[  +0.001183] FS-Cache: O-key=[10] '34333134393838363731'
	[  +0.000776] FS-Cache: N-cookie c=000000eb [p=00000002 fl=2 nc=0 na=1]
	[  +0.000908] FS-Cache: N-cookie d=00000000fe6607cc{9P.session} n=00000000648dde5c
	[  +0.001093] FS-Cache: N-key=[10] '34333134393838363731'
	
	* 
	* ==> etcd [c1d6009c9713e04281b18d489191e5d5ab5870df45187233254dac0dc82bd725] <==
	* {"level":"info","ts":"2023-09-14T22:53:18.599874Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-14T22:53:18.60008Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-14T22:53:18.600144Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-14T22:53:18.600215Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-09-14T22:53:18.600253Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-09-14T22:53:18.600945Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-09-14T22:53:18.601087Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-09-14T22:53:19.551492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-14T22:53:19.551598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-14T22:53:19.551638Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-09-14T22:53:19.551677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-09-14T22:53:19.55171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-09-14T22:53:19.551749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-09-14T22:53:19.551785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-09-14T22:53:19.555323Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:53:19.556527Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-174950 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T22:53:19.560551Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:53:19.560627Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T22:53:19.561956Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T22:53:19.568531Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T22:53:19.568587Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T22:53:19.568913Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:53:19.568947Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-09-14T22:53:19.569022Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T22:53:19.569076Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  22:54:17 up 22:36,  0 users,  load average: 1.11, 1.48, 1.39
	Linux multinode-174950 5.15.0-1044-aws #49~20.04.1-Ubuntu SMP Mon Aug 21 17:10:24 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [384782373e5af8e6da9422fb2a3dbb02ec3d950ef02e515e5a34f8df9f48dc1c] <==
	* I0914 22:53:41.476823       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0914 22:53:41.476884       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0914 22:53:41.477016       1 main.go:116] setting mtu 1500 for CNI 
	I0914 22:53:41.477027       1 main.go:146] kindnetd IP family: "ipv4"
	I0914 22:53:41.477037       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0914 22:53:41.873765       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0914 22:53:41.873869       1 main.go:227] handling current node
	I0914 22:53:51.892096       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0914 22:53:51.892127       1 main.go:227] handling current node
	I0914 22:54:01.909003       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0914 22:54:01.909031       1 main.go:227] handling current node
	I0914 22:54:11.913335       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0914 22:54:11.913365       1 main.go:227] handling current node
	I0914 22:54:11.913375       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0914 22:54:11.913381       1 main.go:250] Node multinode-174950-m02 has CIDR [10.244.1.0/24] 
	I0914 22:54:11.913532       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [4efd448e37f7562db5976c76bdebed004efa31c9af3f6e831b23abd27977d285] <==
	* I0914 22:53:22.976017       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0914 22:53:22.976435       1 shared_informer.go:318] Caches are synced for configmaps
	I0914 22:53:22.991484       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0914 22:53:22.991948       1 aggregator.go:166] initial CRD sync complete...
	I0914 22:53:22.991966       1 autoregister_controller.go:141] Starting autoregister controller
	I0914 22:53:22.991973       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0914 22:53:22.991980       1 cache.go:39] Caches are synced for autoregister controller
	I0914 22:53:23.023740       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 22:53:23.682989       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0914 22:53:23.686714       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0914 22:53:23.686737       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0914 22:53:24.201030       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 22:53:24.237399       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0914 22:53:24.281444       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0914 22:53:24.288823       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0914 22:53:24.290753       1 controller.go:624] quota admission added evaluator for: endpoints
	I0914 22:53:24.295041       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 22:53:24.973858       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0914 22:53:25.972951       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0914 22:53:25.989531       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0914 22:53:26.000299       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0914 22:53:38.034170       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0914 22:53:38.733332       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E0914 22:54:13.994503       1 upgradeaware.go:439] Error proxying data from backend to client: write tcp 192.168.58.2:8443->192.168.58.1:50072: write: broken pipe
	E0914 22:54:14.468856       1 upgradeaware.go:439] Error proxying data from backend to client: write tcp 192.168.58.2:8443->192.168.58.1:50100: write: broken pipe
	
	* 
	* ==> kube-controller-manager [95f66a6e8a895a02d9b3eef611e6225fa33801724ec3bfd0d37f8ef2fa7886a4] <==
	* I0914 22:53:39.405265       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.376µs"
	I0914 22:53:42.474291       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="110.35µs"
	I0914 22:53:42.496891       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.606µs"
	I0914 22:53:42.776775       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0914 22:53:43.254198       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.915µs"
	I0914 22:53:43.290664       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.535017ms"
	I0914 22:53:43.290832       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="136.377µs"
	I0914 22:54:02.741551       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-174950-m02\" does not exist"
	I0914 22:54:02.748970       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-174950-m02" podCIDRs=["10.244.1.0/24"]
	I0914 22:54:02.758679       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-44lgb"
	I0914 22:54:02.760058       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-bnzw9"
	I0914 22:54:02.778325       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-174950-m02"
	I0914 22:54:02.778763       1 event.go:307] "Event occurred" object="multinode-174950-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-174950-m02 event: Registered Node multinode-174950-m02 in Controller"
	I0914 22:54:06.355316       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-174950-m02"
	I0914 22:54:09.042085       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0914 22:54:09.060534       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-fkf4t"
	I0914 22:54:09.078571       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-grlb8"
	I0914 22:54:09.104123       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="58.937534ms"
	I0914 22:54:09.116778       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="12.607698ms"
	I0914 22:54:09.117721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="44.423µs"
	I0914 22:54:09.124991       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="50.93µs"
	I0914 22:54:12.313137       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.171706ms"
	I0914 22:54:12.313406       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="65.576µs"
	I0914 22:54:12.468200       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.048899ms"
	I0914 22:54:12.468423       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="69.612µs"
	
	* 
	* ==> kube-proxy [8f968b3a535a1df1d720bc891c5eb16c500ad5740fc558b134b171258a783a16] <==
	* I0914 22:53:39.718221       1 server_others.go:69] "Using iptables proxy"
	I0914 22:53:39.792878       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0914 22:53:39.902976       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0914 22:53:39.909351       1 server_others.go:152] "Using iptables Proxier"
	I0914 22:53:39.909395       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0914 22:53:39.909404       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0914 22:53:39.909480       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 22:53:39.909696       1 server.go:846] "Version info" version="v1.28.1"
	I0914 22:53:39.909712       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 22:53:39.913401       1 config.go:188] "Starting service config controller"
	I0914 22:53:39.913437       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 22:53:39.913455       1 config.go:97] "Starting endpoint slice config controller"
	I0914 22:53:39.913464       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 22:53:39.923437       1 config.go:315] "Starting node config controller"
	I0914 22:53:39.923457       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 22:53:40.018833       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0914 22:53:40.018932       1 shared_informer.go:318] Caches are synced for service config
	I0914 22:53:40.024476       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [d51b38cf29736dcbef97647666cb8400066c6309f86bb65266d0efe7d266234f] <==
	* W0914 22:53:22.951417       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 22:53:22.952381       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0914 22:53:22.951450       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 22:53:22.952480       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0914 22:53:22.951515       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0914 22:53:22.951548       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0914 22:53:22.951581       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0914 22:53:22.951307       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 22:53:22.952654       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 22:53:22.952699       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 22:53:22.952805       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 22:53:22.952814       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0914 22:53:22.968913       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 22:53:22.969017       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 22:53:23.790467       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 22:53:23.790588       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0914 22:53:23.798554       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 22:53:23.798646       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0914 22:53:23.805703       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 22:53:23.805813       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 22:53:23.980764       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 22:53:23.980803       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0914 22:53:24.008641       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 22:53:24.008678       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0914 22:53:24.500637       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Sep 14 22:53:38 multinode-174950 kubelet[1401]: I0914 22:53:38.764339    1401 topology_manager.go:215] "Topology Admit Handler" podUID="b0b0e2b5-0d63-45d9-95e4-6a75fc24e367" podNamespace="kube-system" podName="kindnet-x8mln"
	Sep 14 22:53:38 multinode-174950 kubelet[1401]: I0914 22:53:38.861419    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44f7ab98-fab3-4a63-ad84-ccb6e71f9c3b-lib-modules\") pod \"kube-proxy-hfqpz\" (UID: \"44f7ab98-fab3-4a63-ad84-ccb6e71f9c3b\") " pod="kube-system/kube-proxy-hfqpz"
	Sep 14 22:53:38 multinode-174950 kubelet[1401]: I0914 22:53:38.861475    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b0b0e2b5-0d63-45d9-95e4-6a75fc24e367-cni-cfg\") pod \"kindnet-x8mln\" (UID: \"b0b0e2b5-0d63-45d9-95e4-6a75fc24e367\") " pod="kube-system/kindnet-x8mln"
	Sep 14 22:53:38 multinode-174950 kubelet[1401]: I0914 22:53:38.861501    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0b0e2b5-0d63-45d9-95e4-6a75fc24e367-xtables-lock\") pod \"kindnet-x8mln\" (UID: \"b0b0e2b5-0d63-45d9-95e4-6a75fc24e367\") " pod="kube-system/kindnet-x8mln"
	Sep 14 22:53:38 multinode-174950 kubelet[1401]: I0914 22:53:38.861527    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44f7ab98-fab3-4a63-ad84-ccb6e71f9c3b-xtables-lock\") pod \"kube-proxy-hfqpz\" (UID: \"44f7ab98-fab3-4a63-ad84-ccb6e71f9c3b\") " pod="kube-system/kube-proxy-hfqpz"
	Sep 14 22:53:38 multinode-174950 kubelet[1401]: I0914 22:53:38.861550    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/44f7ab98-fab3-4a63-ad84-ccb6e71f9c3b-kube-proxy\") pod \"kube-proxy-hfqpz\" (UID: \"44f7ab98-fab3-4a63-ad84-ccb6e71f9c3b\") " pod="kube-system/kube-proxy-hfqpz"
	Sep 14 22:53:38 multinode-174950 kubelet[1401]: I0914 22:53:38.861577    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjw7z\" (UniqueName: \"kubernetes.io/projected/44f7ab98-fab3-4a63-ad84-ccb6e71f9c3b-kube-api-access-cjw7z\") pod \"kube-proxy-hfqpz\" (UID: \"44f7ab98-fab3-4a63-ad84-ccb6e71f9c3b\") " pod="kube-system/kube-proxy-hfqpz"
	Sep 14 22:53:38 multinode-174950 kubelet[1401]: I0914 22:53:38.861602    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0b0e2b5-0d63-45d9-95e4-6a75fc24e367-lib-modules\") pod \"kindnet-x8mln\" (UID: \"b0b0e2b5-0d63-45d9-95e4-6a75fc24e367\") " pod="kube-system/kindnet-x8mln"
	Sep 14 22:53:38 multinode-174950 kubelet[1401]: I0914 22:53:38.861630    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksmsn\" (UniqueName: \"kubernetes.io/projected/b0b0e2b5-0d63-45d9-95e4-6a75fc24e367-kube-api-access-ksmsn\") pod \"kindnet-x8mln\" (UID: \"b0b0e2b5-0d63-45d9-95e4-6a75fc24e367\") " pod="kube-system/kindnet-x8mln"
	Sep 14 22:53:39 multinode-174950 kubelet[1401]: W0914 22:53:39.140416    1401 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5804e744fc84baf470e6572e7e22ed97634e03253cb719647b3f90dadf424715/crio-09a7bd2aaabab0e63fb0b8945684ba1d6c9b6c1e798249252a7325c22c423c75 WatchSource:0}: Error finding container 09a7bd2aaabab0e63fb0b8945684ba1d6c9b6c1e798249252a7325c22c423c75: Status 404 returned error can't find the container with id 09a7bd2aaabab0e63fb0b8945684ba1d6c9b6c1e798249252a7325c22c423c75
	Sep 14 22:53:42 multinode-174950 kubelet[1401]: I0914 22:53:42.242550    1401 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hfqpz" podStartSLOduration=4.242506625 podCreationTimestamp="2023-09-14 22:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-14 22:53:40.245754703 +0000 UTC m=+14.299316624" watchObservedRunningTime="2023-09-14 22:53:42.242506625 +0000 UTC m=+16.296068554"
	Sep 14 22:53:42 multinode-174950 kubelet[1401]: I0914 22:53:42.441746    1401 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Sep 14 22:53:42 multinode-174950 kubelet[1401]: I0914 22:53:42.468847    1401 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-x8mln" podStartSLOduration=2.278802684 podCreationTimestamp="2023-09-14 22:53:38 +0000 UTC" firstStartedPulling="2023-09-14 22:53:39.149786965 +0000 UTC m=+13.203348886" lastFinishedPulling="2023-09-14 22:53:41.339786589 +0000 UTC m=+15.393348510" observedRunningTime="2023-09-14 22:53:42.242794699 +0000 UTC m=+16.296356628" watchObservedRunningTime="2023-09-14 22:53:42.468802308 +0000 UTC m=+16.522364237"
	Sep 14 22:53:42 multinode-174950 kubelet[1401]: I0914 22:53:42.469095    1401 topology_manager.go:215] "Topology Admit Handler" podUID="6fd7dc96-c3be-4061-9503-3553207816e2" podNamespace="kube-system" podName="storage-provisioner"
	Sep 14 22:53:42 multinode-174950 kubelet[1401]: I0914 22:53:42.472685    1401 topology_manager.go:215] "Topology Admit Handler" podUID="76ecbab3-e96d-4c2e-be1e-21bed9f04965" podNamespace="kube-system" podName="coredns-5dd5756b68-2xp7v"
	Sep 14 22:53:42 multinode-174950 kubelet[1401]: I0914 22:53:42.489556    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76ecbab3-e96d-4c2e-be1e-21bed9f04965-config-volume\") pod \"coredns-5dd5756b68-2xp7v\" (UID: \"76ecbab3-e96d-4c2e-be1e-21bed9f04965\") " pod="kube-system/coredns-5dd5756b68-2xp7v"
	Sep 14 22:53:42 multinode-174950 kubelet[1401]: I0914 22:53:42.489615    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6fd7dc96-c3be-4061-9503-3553207816e2-tmp\") pod \"storage-provisioner\" (UID: \"6fd7dc96-c3be-4061-9503-3553207816e2\") " pod="kube-system/storage-provisioner"
	Sep 14 22:53:42 multinode-174950 kubelet[1401]: I0914 22:53:42.489641    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-227hw\" (UniqueName: \"kubernetes.io/projected/76ecbab3-e96d-4c2e-be1e-21bed9f04965-kube-api-access-227hw\") pod \"coredns-5dd5756b68-2xp7v\" (UID: \"76ecbab3-e96d-4c2e-be1e-21bed9f04965\") " pod="kube-system/coredns-5dd5756b68-2xp7v"
	Sep 14 22:53:42 multinode-174950 kubelet[1401]: I0914 22:53:42.489671    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2dzl\" (UniqueName: \"kubernetes.io/projected/6fd7dc96-c3be-4061-9503-3553207816e2-kube-api-access-b2dzl\") pod \"storage-provisioner\" (UID: \"6fd7dc96-c3be-4061-9503-3553207816e2\") " pod="kube-system/storage-provisioner"
	Sep 14 22:53:42 multinode-174950 kubelet[1401]: W0914 22:53:42.828105    1401 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5804e744fc84baf470e6572e7e22ed97634e03253cb719647b3f90dadf424715/crio-3ef2842afaeb95f015cd59e4bfc3d2fd6192e55841a62fd394f96980f0ae0ac0 WatchSource:0}: Error finding container 3ef2842afaeb95f015cd59e4bfc3d2fd6192e55841a62fd394f96980f0ae0ac0: Status 404 returned error can't find the container with id 3ef2842afaeb95f015cd59e4bfc3d2fd6192e55841a62fd394f96980f0ae0ac0
	Sep 14 22:53:43 multinode-174950 kubelet[1401]: I0914 22:53:43.252259    1401 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-2xp7v" podStartSLOduration=5.25220937 podCreationTimestamp="2023-09-14 22:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-14 22:53:43.251624188 +0000 UTC m=+17.305186126" watchObservedRunningTime="2023-09-14 22:53:43.25220937 +0000 UTC m=+17.305771290"
	Sep 14 22:53:43 multinode-174950 kubelet[1401]: I0914 22:53:43.281594    1401 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=3.281551819 podCreationTimestamp="2023-09-14 22:53:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-14 22:53:43.265697805 +0000 UTC m=+17.319259742" watchObservedRunningTime="2023-09-14 22:53:43.281551819 +0000 UTC m=+17.335113739"
	Sep 14 22:54:09 multinode-174950 kubelet[1401]: I0914 22:54:09.093890    1401 topology_manager.go:215] "Topology Admit Handler" podUID="873459a7-4c86-4e2f-81ef-9adc7de708ec" podNamespace="default" podName="busybox-5bc68d56bd-grlb8"
	Sep 14 22:54:09 multinode-174950 kubelet[1401]: I0914 22:54:09.160904    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8rsl\" (UniqueName: \"kubernetes.io/projected/873459a7-4c86-4e2f-81ef-9adc7de708ec-kube-api-access-h8rsl\") pod \"busybox-5bc68d56bd-grlb8\" (UID: \"873459a7-4c86-4e2f-81ef-9adc7de708ec\") " pod="default/busybox-5bc68d56bd-grlb8"
	Sep 14 22:54:09 multinode-174950 kubelet[1401]: W0914 22:54:09.435691    1401 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5804e744fc84baf470e6572e7e22ed97634e03253cb719647b3f90dadf424715/crio-f28021dcf4aeb204bd95c62daafe706b7fa5cf34b7e9d68913b45f8eed0ae914 WatchSource:0}: Error finding container f28021dcf4aeb204bd95c62daafe706b7fa5cf34b7e9d68913b45f8eed0ae914: Status 404 returned error can't find the container with id f28021dcf4aeb204bd95c62daafe706b7fa5cf34b7e9d68913b45f8eed0ae914
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-174950 -n multinode-174950
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-174950 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.09s)
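(Note: the host-connectivity check this test exercises can be approximated by hand against the same profile; the commands below are a hypothetical sketch based on the busybox pod names and the 192.168.58.1 host address visible in the logs above, not the harness's exact invocation.)

	# hypothetical manual approximation of the PingHostFrom2Pods check
	kubectl --context multinode-174950 exec busybox-5bc68d56bd-fkf4t -- sh -c "ping -c 1 192.168.58.1"
	kubectl --context multinode-174950 exec busybox-5bc68d56bd-grlb8 -- sh -c "ping -c 1 192.168.58.1"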

                                                
                                    
TestRunningBinaryUpgrade (66.52s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.1854956045.exe start -p running-upgrade-629800 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0914 23:09:49.832438 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.1854956045.exe start -p running-upgrade-629800 --memory=2200 --vm-driver=docker  --container-runtime=crio: (59.066607343s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-629800 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-629800 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (3.410408379s)

                                                
                                                
-- stdout --
	* [running-upgrade-629800] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-629800 in cluster running-upgrade-629800
	* Pulling base image ...
	* Updating the running docker "running-upgrade-629800" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:10:02.959795 2969608 out.go:296] Setting OutFile to fd 1 ...
	I0914 23:10:02.960009 2969608 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 23:10:02.960032 2969608 out.go:309] Setting ErrFile to fd 2...
	I0914 23:10:02.960063 2969608 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 23:10:02.960378 2969608 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-2840729/.minikube/bin
	I0914 23:10:02.960925 2969608 out.go:303] Setting JSON to false
	I0914 23:10:02.965246 2969608 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":82348,"bootTime":1694650655,"procs":292,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0914 23:10:02.965319 2969608 start.go:138] virtualization:  
	I0914 23:10:02.969856 2969608 out.go:177] * [running-upgrade-629800] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0914 23:10:02.971771 2969608 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 23:10:02.974070 2969608 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:10:02.971890 2969608 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0914 23:10:02.971968 2969608 notify.go:220] Checking for updates...
	I0914 23:10:02.978066 2969608 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 23:10:02.980420 2969608 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube
	I0914 23:10:02.982814 2969608 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 23:10:02.984731 2969608 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:10:02.987604 2969608 config.go:182] Loaded profile config "running-upgrade-629800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0914 23:10:02.990387 2969608 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0914 23:10:02.992119 2969608 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 23:10:03.019367 2969608 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0914 23:10:03.019474 2969608 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 23:10:03.180386 2969608 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2023-09-14 23:10:03.162098249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 23:10:03.180489 2969608 docker.go:294] overlay module found
	I0914 23:10:03.184394 2969608 out.go:177] * Using the docker driver based on existing profile
	I0914 23:10:03.186270 2969608 start.go:298] selected driver: docker
	I0914 23:10:03.186283 2969608 start.go:902] validating driver "docker" against &{Name:running-upgrade-629800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-629800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.41 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0914 23:10:03.186390 2969608 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:10:03.187013 2969608 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 23:10:03.364306 2969608 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2023-09-14 23:10:03.353414059 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 23:10:03.364926 2969608 cni.go:84] Creating CNI manager for ""
	I0914 23:10:03.364995 2969608 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 23:10:03.365013 2969608 start_flags.go:321] config:
	{Name:running-upgrade-629800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-629800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.41 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0914 23:10:03.368633 2969608 out.go:177] * Starting control plane node running-upgrade-629800 in cluster running-upgrade-629800
	I0914 23:10:03.370592 2969608 cache.go:122] Beginning downloading kic base image for docker with crio
	I0914 23:10:03.372607 2969608 out.go:177] * Pulling base image ...
	I0914 23:10:03.374637 2969608 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0914 23:10:03.374918 2969608 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0914 23:10:03.381186 2969608 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I0914 23:10:03.409117 2969608 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0914 23:10:03.409138 2969608 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0914 23:10:03.444826 2969608 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0914 23:10:03.444994 2969608 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/running-upgrade-629800/config.json ...
	I0914 23:10:03.445085 2969608 cache.go:107] acquiring lock: {Name:mkfb7a01b2c28b895311d739176129f21ced99a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:10:03.445176 2969608 cache.go:115] /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0914 23:10:03.445185 2969608 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.103µs
	I0914 23:10:03.445212 2969608 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0914 23:10:03.445221 2969608 cache.go:107] acquiring lock: {Name:mk76032a19b68713b35124ddc897f19026a7438e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:10:03.445249 2969608 cache.go:195] Successfully downloaded all kic artifacts
	I0914 23:10:03.445259 2969608 cache.go:107] acquiring lock: {Name:mkf71df78f864c319bca733d50832feee5564baf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:10:03.445293 2969608 cache.go:115] /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0914 23:10:03.445299 2969608 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 40.927µs
	I0914 23:10:03.445253 2969608 cache.go:115] /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0914 23:10:03.445307 2969608 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0914 23:10:03.445293 2969608 start.go:365] acquiring machines lock for running-upgrade-629800: {Name:mk24c923c76f534105527374d425572c09a8d623 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:10:03.445316 2969608 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 94.81µs
	I0914 23:10:03.445326 2969608 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0914 23:10:03.445328 2969608 start.go:369] acquired machines lock for "running-upgrade-629800" in 12.102µs
	I0914 23:10:03.445341 2969608 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:10:03.445349 2969608 fix.go:54] fixHost starting: 
	I0914 23:10:03.445343 2969608 cache.go:107] acquiring lock: {Name:mkb7d0b3abfb357f1e3ae0d11b10b748d2917268 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:10:03.445389 2969608 cache.go:115] /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0914 23:10:03.445395 2969608 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 53.432µs
	I0914 23:10:03.445402 2969608 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0914 23:10:03.445412 2969608 cache.go:107] acquiring lock: {Name:mk944319bce43d0fa25bd88cf1d736630e27c361 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:10:03.445439 2969608 cache.go:115] /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0914 23:10:03.445445 2969608 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 34.904µs
	I0914 23:10:03.445452 2969608 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0914 23:10:03.445461 2969608 cache.go:107] acquiring lock: {Name:mk502a05df334e4a26286e901955bc5afc10de99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:10:03.445485 2969608 cache.go:115] /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0914 23:10:03.445490 2969608 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 30.269µs
	I0914 23:10:03.445496 2969608 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0914 23:10:03.445506 2969608 cache.go:107] acquiring lock: {Name:mk1214f169a1ad7693ec44c2ba0a0424a92b8633 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:10:03.445532 2969608 cache.go:115] /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0914 23:10:03.445536 2969608 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 31.105µs
	I0914 23:10:03.445542 2969608 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0914 23:10:03.445553 2969608 cache.go:107] acquiring lock: {Name:mka620cea4c285c172f61f5d6134ade35fc05543 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:10:03.445576 2969608 cache.go:115] /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0914 23:10:03.445580 2969608 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 28.866µs
	I0914 23:10:03.445598 2969608 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0914 23:10:03.445605 2969608 cache.go:87] Successfully saved all images to host disk.
	I0914 23:10:03.445641 2969608 cli_runner.go:164] Run: docker container inspect running-upgrade-629800 --format={{.State.Status}}
	I0914 23:10:03.468641 2969608 fix.go:102] recreateIfNeeded on running-upgrade-629800: state=Running err=<nil>
	W0914 23:10:03.468678 2969608 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 23:10:03.471027 2969608 out.go:177] * Updating the running docker "running-upgrade-629800" container ...
	I0914 23:10:03.473033 2969608 machine.go:88] provisioning docker machine ...
	I0914 23:10:03.473076 2969608 ubuntu.go:169] provisioning hostname "running-upgrade-629800"
	I0914 23:10:03.473168 2969608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-629800
	I0914 23:10:03.491010 2969608 main.go:141] libmachine: Using SSH client type: native
	I0914 23:10:03.491552 2969608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36574 <nil> <nil>}
	I0914 23:10:03.491573 2969608 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-629800 && echo "running-upgrade-629800" | sudo tee /etc/hostname
	I0914 23:10:03.645874 2969608 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-629800
	
	I0914 23:10:03.645956 2969608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-629800
	I0914 23:10:03.669417 2969608 main.go:141] libmachine: Using SSH client type: native
	I0914 23:10:03.669862 2969608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36574 <nil> <nil>}
	I0914 23:10:03.669890 2969608 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-629800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-629800/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-629800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 23:10:03.810139 2969608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 23:10:03.810161 2969608 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17243-2840729/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-2840729/.minikube}
	I0914 23:10:03.810207 2969608 ubuntu.go:177] setting up certificates
	I0914 23:10:03.810216 2969608 provision.go:83] configureAuth start
	I0914 23:10:03.810291 2969608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-629800
	I0914 23:10:03.831139 2969608 provision.go:138] copyHostCerts
	I0914 23:10:03.831196 2969608 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem, removing ...
	I0914 23:10:03.831204 2969608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem
	I0914 23:10:03.831279 2969608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem (1078 bytes)
	I0914 23:10:03.831397 2969608 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem, removing ...
	I0914 23:10:03.831403 2969608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem
	I0914 23:10:03.831432 2969608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem (1123 bytes)
	I0914 23:10:03.831484 2969608 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem, removing ...
	I0914 23:10:03.831488 2969608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem
	I0914 23:10:03.831511 2969608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem (1675 bytes)
	I0914 23:10:03.831558 2969608 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-629800 san=[192.168.70.41 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-629800]
	I0914 23:10:03.983303 2969608 provision.go:172] copyRemoteCerts
	I0914 23:10:03.983374 2969608 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 23:10:03.983420 2969608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-629800
	I0914 23:10:04.002174 2969608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36574 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/running-upgrade-629800/id_rsa Username:docker}
	I0914 23:10:04.102373 2969608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 23:10:04.127228 2969608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 23:10:04.150858 2969608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0914 23:10:04.174221 2969608 provision.go:86] duration metric: configureAuth took 363.990876ms
	I0914 23:10:04.174256 2969608 ubuntu.go:193] setting minikube options for container-runtime
	I0914 23:10:04.174459 2969608 config.go:182] Loaded profile config "running-upgrade-629800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0914 23:10:04.174580 2969608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-629800
	I0914 23:10:04.193218 2969608 main.go:141] libmachine: Using SSH client type: native
	I0914 23:10:04.193628 2969608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36574 <nil> <nil>}
	I0914 23:10:04.193649 2969608 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 23:10:04.741653 2969608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 23:10:04.741675 2969608 machine.go:91] provisioned docker machine in 1.268625088s
	I0914 23:10:04.741686 2969608 start.go:300] post-start starting for "running-upgrade-629800" (driver="docker")
	I0914 23:10:04.741697 2969608 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 23:10:04.741762 2969608 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 23:10:04.741815 2969608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-629800
	I0914 23:10:04.760814 2969608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36574 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/running-upgrade-629800/id_rsa Username:docker}
	I0914 23:10:04.861412 2969608 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 23:10:04.865188 2969608 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 23:10:04.865210 2969608 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 23:10:04.865222 2969608 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 23:10:04.865228 2969608 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0914 23:10:04.865238 2969608 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-2840729/.minikube/addons for local assets ...
	I0914 23:10:04.865297 2969608 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-2840729/.minikube/files for local assets ...
	I0914 23:10:04.865394 2969608 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem -> 28461092.pem in /etc/ssl/certs
	I0914 23:10:04.865493 2969608 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 23:10:04.873914 2969608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem --> /etc/ssl/certs/28461092.pem (1708 bytes)
	I0914 23:10:04.896971 2969608 start.go:303] post-start completed in 155.268168ms
	I0914 23:10:04.897056 2969608 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 23:10:04.897101 2969608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-629800
	I0914 23:10:04.916409 2969608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36574 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/running-upgrade-629800/id_rsa Username:docker}
	I0914 23:10:05.013063 2969608 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 23:10:05.019679 2969608 fix.go:56] fixHost completed within 1.574323125s
	I0914 23:10:05.019703 2969608 start.go:83] releasing machines lock for "running-upgrade-629800", held for 1.574367031s
	I0914 23:10:05.019790 2969608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-629800
	I0914 23:10:05.038780 2969608 ssh_runner.go:195] Run: cat /version.json
	I0914 23:10:05.038838 2969608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-629800
	I0914 23:10:05.039082 2969608 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 23:10:05.039169 2969608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-629800
	I0914 23:10:05.063319 2969608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36574 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/running-upgrade-629800/id_rsa Username:docker}
	I0914 23:10:05.071367 2969608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36574 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/running-upgrade-629800/id_rsa Username:docker}
	W0914 23:10:05.161318 2969608 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0914 23:10:05.161432 2969608 ssh_runner.go:195] Run: systemctl --version
	I0914 23:10:05.324962 2969608 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 23:10:05.486774 2969608 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 23:10:05.492606 2969608 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 23:10:05.520274 2969608 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0914 23:10:05.520361 2969608 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 23:10:05.544610 2969608 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 23:10:05.544634 2969608 start.go:469] detecting cgroup driver to use...
	I0914 23:10:05.544664 2969608 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0914 23:10:05.544713 2969608 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 23:10:05.584195 2969608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 23:10:05.597186 2969608 docker.go:196] disabling cri-docker service (if available) ...
	I0914 23:10:05.597294 2969608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 23:10:05.610063 2969608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 23:10:05.622872 2969608 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0914 23:10:05.636312 2969608 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0914 23:10:05.636430 2969608 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 23:10:05.776407 2969608 docker.go:212] disabling docker service ...
	I0914 23:10:05.776554 2969608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 23:10:05.792371 2969608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 23:10:05.811151 2969608 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 23:10:06.040363 2969608 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 23:10:06.221072 2969608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 23:10:06.241217 2969608 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 23:10:06.260799 2969608 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 23:10:06.260864 2969608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 23:10:06.275795 2969608 out.go:177] 
	W0914 23:10:06.277690 2969608 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0914 23:10:06.277712 2969608 out.go:239] * 
	* 
	W0914 23:10:06.278604 2969608 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:10:06.280948 2969608 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-629800 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-09-14 23:10:06.320431944 +0000 UTC m=+2605.914787207
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-629800
helpers_test.go:235: (dbg) docker inspect running-upgrade-629800:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d4c94e394a2dc7146f0d40210e0dd4874d67d10a0170f057d46655e289bbaedb",
	        "Created": "2023-09-14T23:09:18.332713105Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2966194,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-14T23:09:18.848001458Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/d4c94e394a2dc7146f0d40210e0dd4874d67d10a0170f057d46655e289bbaedb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4c94e394a2dc7146f0d40210e0dd4874d67d10a0170f057d46655e289bbaedb/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4c94e394a2dc7146f0d40210e0dd4874d67d10a0170f057d46655e289bbaedb/hosts",
	        "LogPath": "/var/lib/docker/containers/d4c94e394a2dc7146f0d40210e0dd4874d67d10a0170f057d46655e289bbaedb/d4c94e394a2dc7146f0d40210e0dd4874d67d10a0170f057d46655e289bbaedb-json.log",
	        "Name": "/running-upgrade-629800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "running-upgrade-629800:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-629800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f371fa2460b6e35ae655deb62a379bb1c8a6d74a341cfcc32995adb1433ff2a1-init/diff:/var/lib/docker/overlay2/7c5da317103d95204ba0214f0ef1a12508e64901f3157e24af3061ffc1e4ab2c/diff:/var/lib/docker/overlay2/5b98c6906039704573521f04becd106cb41206690477bc867ea3640fe19e5f73/diff:/var/lib/docker/overlay2/fdbfa6ccd992f0a761a93d77fffe2579992788f3e9a42802da17d79942a55194/diff:/var/lib/docker/overlay2/290866a47da2b40e026d6e51e7d3d71c9f3cf2712e8e918186279402819a5925/diff:/var/lib/docker/overlay2/010e671dd7277ab4d6fcebea1e8ae45df32172f4d78b7eea01522879f30197e6/diff:/var/lib/docker/overlay2/8af3150eee58189e4e1d576013909d62ce383e0aa75c241d6d8b0503f1fb4e4c/diff:/var/lib/docker/overlay2/55b87605c19756005b2e142b6bdc1265e7896535dbf570b2ffaddba63fb9da79/diff:/var/lib/docker/overlay2/141869b4171db4cedae050e1d0fbc222650fc0a7e3d212d05165a963b3cfb63f/diff:/var/lib/docker/overlay2/1516efaba3656ff816f42db5d6a3c7f4b77229eeda6c88b519d45adfcfcc6632/diff:/var/lib/docker/overlay2/92b945
493089feb64934863631b4beb3d9ce01a378ac5c2407b66b04ea2447a3/diff:/var/lib/docker/overlay2/ac2c36ef4335422d624e243d8cf3e24d6c2f4def64122c3eb512f60a1511e93c/diff:/var/lib/docker/overlay2/ed674f6894c13f612fee33912fda53361d1e96c0cc7c2317d1d7e622631c797d/diff:/var/lib/docker/overlay2/fb1c936cf47fe32bea5c11e8a886bb34dbb54dbeb7cd1310b26e64054cbd168d/diff:/var/lib/docker/overlay2/bfeebc6bdcc2eb29846897e9745903c9991a97ae36e8aaf5dcc32d49c7339d4f/diff:/var/lib/docker/overlay2/c7725dc869d0db4f1c49ed5aec2aa1191926cca97adc897c2b7986c684ad2b3b/diff:/var/lib/docker/overlay2/72450eeb6564f18ff89d9e468cbc6c23b0aee367aaed3c264b20e46d1b6680ef/diff:/var/lib/docker/overlay2/b15973f157b59c3fb8bdf5b3e5dc0fb5c70a5e9020a3390e398bbbdb9c7626b3/diff:/var/lib/docker/overlay2/eb884593b025987e6bf0bfe1d4e8e5c6a630a9b8684c12476fbc7b9de54c7caa/diff:/var/lib/docker/overlay2/398506c118ae2f43c1c7e6d8a9e4068f5e29459732f0962d002f5253ac7f012e/diff:/var/lib/docker/overlay2/10ff2d64c17d34a0a16ba90719c4553fe5d1f1fce0007baa02150d416d373e4e/diff:/var/lib/d
ocker/overlay2/30dd13feffdf6d16bd1f976f7c6e9bf7067ea11ab7dd113d7324729a8c072126/diff:/var/lib/docker/overlay2/95b33caad65119fa23181fbc14e0333335aea3ab6010c3fbf867bdbdd932aaea/diff:/var/lib/docker/overlay2/2333ca70477bffbdf6db697076b3fbd311a41daeeab4b8aca70fe7d6140605a5/diff:/var/lib/docker/overlay2/f878ccf3e37cb9f9dd3dc428e304724363e474adb406b69b0ed570118bbf8411/diff:/var/lib/docker/overlay2/ba3ebfd4698573b0180c0d5dbe6b3e271de61e9ce939f6282489f611f93dccab/diff:/var/lib/docker/overlay2/3a2dc505be783b0d76de275a15692ece062997531a9e6d3d9cca9203bd6c54e0/diff:/var/lib/docker/overlay2/d89a051735d353577c1783a5bd354e3587cb3bc6a03edc3e57105af2f5646097/diff:/var/lib/docker/overlay2/a4b10c936ee226011173b47efe4ed6b80aa1c9aa58085214e8cef13b8182c584/diff:/var/lib/docker/overlay2/3087e720eca7180c891833d9e2d728303c69cf63d87670543e067c413559f17d/diff:/var/lib/docker/overlay2/26db0195c0003b439823635c3dd99f31398e8656fd6939d416472fa1f7da01fe/diff:/var/lib/docker/overlay2/9f5e8bb5a94074569bf65f0d1beb8167c1b5f40140f4a7ff37f77e27ffc
0bc19/diff:/var/lib/docker/overlay2/d6377d4967be0eefdd83a8a27a1184fc01c28fc6fe5525b09b7ff59de3253ee2/diff:/var/lib/docker/overlay2/cd028181e647b680c76034fcc1e7c516ede314e45de48bf7dcdca3288b80f5bd/diff:/var/lib/docker/overlay2/8aef14538f95ec81f89337c7f29bd8660c76346e678d2efeeb01072595aece76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f371fa2460b6e35ae655deb62a379bb1c8a6d74a341cfcc32995adb1433ff2a1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f371fa2460b6e35ae655deb62a379bb1c8a6d74a341cfcc32995adb1433ff2a1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f371fa2460b6e35ae655deb62a379bb1c8a6d74a341cfcc32995adb1433ff2a1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-629800",
	                "Source": "/var/lib/docker/volumes/running-upgrade-629800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-629800",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-629800",
	                "name.minikube.sigs.k8s.io": "running-upgrade-629800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "81c379a5a934683c4b73960cec5ef0928928845bd07051486ffdeee9c86b1fcb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36574"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36573"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36572"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36571"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/81c379a5a934",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-629800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.41"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d4c94e394a2d",
	                        "running-upgrade-629800"
	                    ],
	                    "NetworkID": "c5d4d286ca12747bf96775f7070cd4d9d92e12530d42c1782f4745ed542d3577",
	                    "EndpointID": "abcf56d9c80c28b1c1efebf2df5ce577ab321f8e6716cac0c0e682f67da679da",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.41",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:29",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-629800 -n running-upgrade-629800
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-629800 -n running-upgrade-629800: exit status 4 (520.708684ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 23:10:06.783554 2970241 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-629800" does not appear in /home/jenkins/minikube-integration/17243-2840729/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-629800" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-629800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-629800
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-629800: (2.574140864s)
--- FAIL: TestRunningBinaryUpgrade (66.52s)
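The RUNTIME_ENABLE failure above comes from the pause_image update step: the new minikube binary rewrites pause_image in /etc/crio/crio.conf.d/02-crio.conf, but the container it is upgrading was created by minikube v1.17.0 from kicbase v0.0.17, which does not ship that drop-in file, so the sed step exits with status 2. A minimal shell sketch of a guarded version of that step follows; the fallback to /etc/crio/crio.conf is an assumption for illustration only, not minikube's actual behaviour.

	# Sketch only: point the edit at whichever CRI-O config file actually exists.
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	# Assumption: older kicbase images keep their CRI-O settings in /etc/crio/crio.conf.
	[ -f "$CONF" ] || CONF=/etc/crio/crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"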

                                                
                                    
x
+
TestMissingContainerUpgrade (169.11s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.4129087600.exe start -p missing-upgrade-595333 --memory=2200 --driver=docker  --container-runtime=crio
E0914 23:04:49.832849 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.4129087600.exe start -p missing-upgrade-595333 --memory=2200 --driver=docker  --container-runtime=crio: (2m7.973479901s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-595333
E0914 23:06:58.210991 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-595333: (2.099954471s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-595333
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-595333 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-595333 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (34.983143187s)

                                                
                                                
-- stdout --
	* [missing-upgrade-595333] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-595333 in cluster missing-upgrade-595333
	* Pulling base image ...
	* docker "missing-upgrade-595333" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:06:59.596942 2956975 out.go:296] Setting OutFile to fd 1 ...
	I0914 23:06:59.597223 2956975 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 23:06:59.597252 2956975 out.go:309] Setting ErrFile to fd 2...
	I0914 23:06:59.597272 2956975 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 23:06:59.597560 2956975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-2840729/.minikube/bin
	I0914 23:06:59.597973 2956975 out.go:303] Setting JSON to false
	I0914 23:06:59.599192 2956975 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":82165,"bootTime":1694650655,"procs":359,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0914 23:06:59.599282 2956975 start.go:138] virtualization:  
	I0914 23:06:59.604581 2956975 out.go:177] * [missing-upgrade-595333] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0914 23:06:59.606578 2956975 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 23:06:59.606691 2956975 notify.go:220] Checking for updates...
	I0914 23:06:59.610711 2956975 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:06:59.612359 2956975 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 23:06:59.614127 2956975 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube
	I0914 23:06:59.615946 2956975 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 23:06:59.617610 2956975 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:06:59.619752 2956975 config.go:182] Loaded profile config "missing-upgrade-595333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0914 23:06:59.622010 2956975 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0914 23:06:59.623784 2956975 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 23:06:59.664675 2956975 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0914 23:06:59.664774 2956975 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 23:06:59.786944 2956975 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2023-09-14 23:06:59.776417001 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 23:06:59.787050 2956975 docker.go:294] overlay module found
	I0914 23:06:59.790010 2956975 out.go:177] * Using the docker driver based on existing profile
	I0914 23:06:59.791970 2956975 start.go:298] selected driver: docker
	I0914 23:06:59.791983 2956975 start.go:902] validating driver "docker" against &{Name:missing-upgrade-595333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-595333 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.134 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0914 23:06:59.792073 2956975 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:06:59.792756 2956975 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 23:06:59.909326 2956975 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2023-09-14 23:06:59.890970893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 23:06:59.909634 2956975 cni.go:84] Creating CNI manager for ""
	I0914 23:06:59.909645 2956975 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 23:06:59.909659 2956975 start_flags.go:321] config:
	{Name:missing-upgrade-595333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-595333 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.134 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0914 23:06:59.911771 2956975 out.go:177] * Starting control plane node missing-upgrade-595333 in cluster missing-upgrade-595333
	I0914 23:06:59.913656 2956975 cache.go:122] Beginning downloading kic base image for docker with crio
	I0914 23:06:59.916730 2956975 out.go:177] * Pulling base image ...
	I0914 23:06:59.918400 2956975 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0914 23:06:59.918597 2956975 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0914 23:06:59.937994 2956975 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I0914 23:06:59.938161 2956975 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I0914 23:06:59.938816 2956975 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W0914 23:06:59.991125 2956975 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0914 23:06:59.991273 2956975 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/missing-upgrade-595333/config.json ...
	I0914 23:06:59.991627 2956975 cache.go:107] acquiring lock: {Name:mkfb7a01b2c28b895311d739176129f21ced99a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:06:59.991694 2956975 cache.go:115] /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0914 23:06:59.991702 2956975 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 80.419µs
	I0914 23:06:59.991715 2956975 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0914 23:06:59.991727 2956975 cache.go:107] acquiring lock: {Name:mk76032a19b68713b35124ddc897f19026a7438e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:06:59.991812 2956975 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I0914 23:06:59.992142 2956975 cache.go:107] acquiring lock: {Name:mk502a05df334e4a26286e901955bc5afc10de99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:06:59.992252 2956975 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I0914 23:06:59.992488 2956975 cache.go:107] acquiring lock: {Name:mkf71df78f864c319bca733d50832feee5564baf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:06:59.992613 2956975 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0914 23:06:59.992810 2956975 cache.go:107] acquiring lock: {Name:mk944319bce43d0fa25bd88cf1d736630e27c361 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:06:59.992930 2956975 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I0914 23:06:59.993396 2956975 cache.go:107] acquiring lock: {Name:mkb7d0b3abfb357f1e3ae0d11b10b748d2917268 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:06:59.994706 2956975 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0914 23:06:59.995247 2956975 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I0914 23:06:59.996105 2956975 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0914 23:06:59.996042 2956975 cache.go:107] acquiring lock: {Name:mk1214f169a1ad7693ec44c2ba0a0424a92b8633 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:07:00.004765 2956975 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0914 23:06:59.996203 2956975 cache.go:107] acquiring lock: {Name:mka620cea4c285c172f61f5d6134ade35fc05543 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:07:00.006054 2956975 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I0914 23:07:00.003692 2956975 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I0914 23:07:00.010582 2956975 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0914 23:07:00.011732 2956975 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0914 23:07:00.015695 2956975 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0914 23:07:00.015807 2956975 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	W0914 23:07:00.465224 2956975 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I0914 23:07:00.465332 2956975 cache.go:162] opening:  /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	I0914 23:07:00.486458 2956975 cache.go:162] opening:  /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0914 23:07:00.522949 2956975 cache.go:162] opening:  /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	W0914 23:07:00.523723 2956975 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I0914 23:07:00.523821 2956975 cache.go:162] opening:  /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	I0914 23:07:00.527245 2956975 cache.go:162] opening:  /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	W0914 23:07:00.545407 2956975 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I0914 23:07:00.545565 2956975 cache.go:162] opening:  /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	I0914 23:07:00.612274 2956975 cache.go:162] opening:  /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	I0914 23:07:00.686277 2956975 cache.go:157] /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0914 23:07:00.686350 2956975 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 692.958196ms
	I0914 23:07:00.686387 2956975 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  49.35 KiB / 287.99 MiB [>] 0.02% ? p/s ?
	I0914 23:07:00.971401 2956975 cache.go:157] /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0914 23:07:00.971845 2956975 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 975.639537ms
	I0914 23:07:00.971888 2956975 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 43.03 MiB
	I0914 23:07:01.497939 2956975 cache.go:157] /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0914 23:07:01.498024 2956975 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 1.50520986s
	I0914 23:07:01.498062 2956975 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 40.26 MiB
	I0914 23:07:01.921643 2956975 cache.go:157] /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0914 23:07:01.921708 2956975 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.92997941s
	I0914 23:07:01.921736 2956975 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.94 MiB / 287.99 MiB  9.01% 40.26 MiB
	I0914 23:07:02.485238 2956975 cache.go:157] /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0914 23:07:02.485388 2956975 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 2.492900328s
	I0914 23:07:02.485567 2956975 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  51.87 MiB / 287.99 MiB  18.01% 38.98 MiB
	I0914 23:07:03.054484 2956975 cache.go:157] /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0914 23:07:03.054517 2956975 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 3.06238129s
	I0914 23:07:03.054531 2956975 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  287.96 MiB / 287.99 MiB  99.99% 47.64 MiB
	I0914 23:07:06.623157 2956975 cache.go:157] /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0914 23:07:06.623186 2956975 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 6.627150045s
	I0914 23:07:06.623199 2956975 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0914 23:07:06.623209 2956975 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 32.63 MiB
	I0914 23:07:09.512948 2956975 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I0914 23:07:09.512959 2956975 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I0914 23:07:10.592551 2956975 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I0914 23:07:10.592591 2956975 cache.go:195] Successfully downloaded all kic artifacts
	I0914 23:07:10.592629 2956975 start.go:365] acquiring machines lock for missing-upgrade-595333: {Name:mk055f8f64b090dab649946e10beab39fff2e0ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:07:10.592695 2956975 start.go:369] acquired machines lock for "missing-upgrade-595333" in 42.618µs
	I0914 23:07:10.592717 2956975 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:07:10.592729 2956975 fix.go:54] fixHost starting: 
	I0914 23:07:10.593025 2956975 cli_runner.go:164] Run: docker container inspect missing-upgrade-595333 --format={{.State.Status}}
	W0914 23:07:10.628665 2956975 cli_runner.go:211] docker container inspect missing-upgrade-595333 --format={{.State.Status}} returned with exit code 1
	I0914 23:07:10.628726 2956975 fix.go:102] recreateIfNeeded on missing-upgrade-595333: state= err=unknown state "missing-upgrade-595333": docker container inspect missing-upgrade-595333 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-595333
	I0914 23:07:10.628765 2956975 fix.go:107] machineExists: false. err=machine does not exist
	I0914 23:07:10.630448 2956975 out.go:177] * docker "missing-upgrade-595333" container is missing, will recreate.
	I0914 23:07:10.632080 2956975 delete.go:124] DEMOLISHING missing-upgrade-595333 ...
	I0914 23:07:10.632180 2956975 cli_runner.go:164] Run: docker container inspect missing-upgrade-595333 --format={{.State.Status}}
	W0914 23:07:10.650931 2956975 cli_runner.go:211] docker container inspect missing-upgrade-595333 --format={{.State.Status}} returned with exit code 1
	W0914 23:07:10.650988 2956975 stop.go:75] unable to get state: unknown state "missing-upgrade-595333": docker container inspect missing-upgrade-595333 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-595333
	I0914 23:07:10.651006 2956975 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-595333": docker container inspect missing-upgrade-595333 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-595333
	I0914 23:07:10.651442 2956975 cli_runner.go:164] Run: docker container inspect missing-upgrade-595333 --format={{.State.Status}}
	W0914 23:07:10.674849 2956975 cli_runner.go:211] docker container inspect missing-upgrade-595333 --format={{.State.Status}} returned with exit code 1
	I0914 23:07:10.674913 2956975 delete.go:82] Unable to get host status for missing-upgrade-595333, assuming it has already been deleted: state: unknown state "missing-upgrade-595333": docker container inspect missing-upgrade-595333 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-595333
	I0914 23:07:10.674973 2956975 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-595333
	W0914 23:07:10.691882 2956975 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-595333 returned with exit code 1
	I0914 23:07:10.691913 2956975 kic.go:367] could not find the container missing-upgrade-595333 to remove it. will try anyways
	I0914 23:07:10.691967 2956975 cli_runner.go:164] Run: docker container inspect missing-upgrade-595333 --format={{.State.Status}}
	W0914 23:07:10.714731 2956975 cli_runner.go:211] docker container inspect missing-upgrade-595333 --format={{.State.Status}} returned with exit code 1
	W0914 23:07:10.714785 2956975 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-595333": docker container inspect missing-upgrade-595333 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-595333
	I0914 23:07:10.714853 2956975 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-595333 /bin/bash -c "sudo init 0"
	W0914 23:07:10.733644 2956975 cli_runner.go:211] docker exec --privileged -t missing-upgrade-595333 /bin/bash -c "sudo init 0" returned with exit code 1
	I0914 23:07:10.733675 2956975 oci.go:647] error shutdown missing-upgrade-595333: docker exec --privileged -t missing-upgrade-595333 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-595333
	I0914 23:07:11.733866 2956975 cli_runner.go:164] Run: docker container inspect missing-upgrade-595333 --format={{.State.Status}}
	W0914 23:07:11.753346 2956975 cli_runner.go:211] docker container inspect missing-upgrade-595333 --format={{.State.Status}} returned with exit code 1
	I0914 23:07:11.753428 2956975 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-595333": docker container inspect missing-upgrade-595333 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-595333
	I0914 23:07:11.753440 2956975 oci.go:661] temporary error: container missing-upgrade-595333 status is  but expect it to be exited
	I0914 23:07:11.753468 2956975 retry.go:31] will retry after 616.982177ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-595333": docker container inspect missing-upgrade-595333 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-595333
	I0914 23:07:12.371384 2956975 cli_runner.go:164] Run: docker container inspect missing-upgrade-595333 --format={{.State.Status}}
	W0914 23:07:12.390220 2956975 cli_runner.go:211] docker container inspect missing-upgrade-595333 --format={{.State.Status}} returned with exit code 1
	I0914 23:07:12.390343 2956975 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-595333": docker container inspect missing-upgrade-595333 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-595333
	I0914 23:07:12.390358 2956975 oci.go:661] temporary error: container missing-upgrade-595333 status is  but expect it to be exited
	I0914 23:07:12.390383 2956975 retry.go:31] will retry after 888.245444ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-595333": docker container inspect missing-upgrade-595333 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-595333
	I0914 23:07:13.278826 2956975 cli_runner.go:164] Run: docker container inspect missing-upgrade-595333 --format={{.State.Status}}
	W0914 23:07:13.296946 2956975 cli_runner.go:211] docker container inspect missing-upgrade-595333 --format={{.State.Status}} returned with exit code 1
	I0914 23:07:13.297008 2956975 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-595333": docker container inspect missing-upgrade-595333 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-595333
	I0914 23:07:13.297022 2956975 oci.go:661] temporary error: container missing-upgrade-595333 status is  but expect it to be exited
	I0914 23:07:13.297047 2956975 retry.go:31] will retry after 1.468293798s: couldn't verify container is exited. %v: unknown state "missing-upgrade-595333": docker container inspect missing-upgrade-595333 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-595333
	I0914 23:07:14.765535 2956975 cli_runner.go:164] Run: docker container inspect missing-upgrade-595333 --format={{.State.Status}}
	W0914 23:07:14.784364 2956975 cli_runner.go:211] docker container inspect missing-upgrade-595333 --format={{.State.Status}} returned with exit code 1
	I0914 23:07:14.784426 2956975 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-595333": docker container inspect missing-upgrade-595333 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-595333
	I0914 23:07:14.784452 2956975 oci.go:661] temporary error: container missing-upgrade-595333 status is  but expect it to be exited
	I0914 23:07:14.784482 2956975 retry.go:31] will retry after 1.017752597s: couldn't verify container is exited. %v: unknown state "missing-upgrade-595333": docker container inspect missing-upgrade-595333 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-595333
	I0914 23:07:15.802684 2956975 cli_runner.go:164] Run: docker container inspect missing-upgrade-595333 --format={{.State.Status}}
	W0914 23:07:15.820481 2956975 cli_runner.go:211] docker container inspect missing-upgrade-595333 --format={{.State.Status}} returned with exit code 1
	I0914 23:07:15.820557 2956975 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-595333": docker container inspect missing-upgrade-595333 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-595333
	I0914 23:07:15.820572 2956975 oci.go:661] temporary error: container missing-upgrade-595333 status is  but expect it to be exited
	I0914 23:07:15.820601 2956975 retry.go:31] will retry after 2.92688308s: couldn't verify container is exited. %v: unknown state "missing-upgrade-595333": docker container inspect missing-upgrade-595333 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-595333
	I0914 23:07:18.749722 2956975 cli_runner.go:164] Run: docker container inspect missing-upgrade-595333 --format={{.State.Status}}
	W0914 23:07:18.784562 2956975 cli_runner.go:211] docker container inspect missing-upgrade-595333 --format={{.State.Status}} returned with exit code 1
	I0914 23:07:18.784629 2956975 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-595333": docker container inspect missing-upgrade-595333 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-595333
	I0914 23:07:18.784646 2956975 oci.go:661] temporary error: container missing-upgrade-595333 status is  but expect it to be exited
	I0914 23:07:18.784671 2956975 retry.go:31] will retry after 2.467941922s: couldn't verify container is exited. %v: unknown state "missing-upgrade-595333": docker container inspect missing-upgrade-595333 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-595333
	I0914 23:07:21.253551 2956975 cli_runner.go:164] Run: docker container inspect missing-upgrade-595333 --format={{.State.Status}}
	W0914 23:07:21.273012 2956975 cli_runner.go:211] docker container inspect missing-upgrade-595333 --format={{.State.Status}} returned with exit code 1
	I0914 23:07:21.273073 2956975 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-595333": docker container inspect missing-upgrade-595333 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-595333
	I0914 23:07:21.273088 2956975 oci.go:661] temporary error: container missing-upgrade-595333 status is  but expect it to be exited
	I0914 23:07:21.273114 2956975 retry.go:31] will retry after 4.710547835s: couldn't verify container is exited. %v: unknown state "missing-upgrade-595333": docker container inspect missing-upgrade-595333 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-595333
	I0914 23:07:25.984815 2956975 cli_runner.go:164] Run: docker container inspect missing-upgrade-595333 --format={{.State.Status}}
	W0914 23:07:26.000636 2956975 cli_runner.go:211] docker container inspect missing-upgrade-595333 --format={{.State.Status}} returned with exit code 1
	I0914 23:07:26.000701 2956975 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-595333": docker container inspect missing-upgrade-595333 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-595333
	I0914 23:07:26.000717 2956975 oci.go:661] temporary error: container missing-upgrade-595333 status is  but expect it to be exited
	I0914 23:07:26.000749 2956975 oci.go:88] couldn't shut down missing-upgrade-595333 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-595333": docker container inspect missing-upgrade-595333 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-595333
	 
	I0914 23:07:26.000838 2956975 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-595333
	I0914 23:07:26.018700 2956975 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-595333
	W0914 23:07:26.035159 2956975 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-595333 returned with exit code 1
	I0914 23:07:26.035243 2956975 cli_runner.go:164] Run: docker network inspect missing-upgrade-595333 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 23:07:26.052644 2956975 cli_runner.go:164] Run: docker network rm missing-upgrade-595333
	I0914 23:07:26.149972 2956975 fix.go:114] Sleeping 1 second for extra luck!
	I0914 23:07:27.150778 2956975 start.go:125] createHost starting for "" (driver="docker")
	I0914 23:07:27.152887 2956975 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0914 23:07:27.153030 2956975 start.go:159] libmachine.API.Create for "missing-upgrade-595333" (driver="docker")
	I0914 23:07:27.153055 2956975 client.go:168] LocalClient.Create starting
	I0914 23:07:27.153149 2956975 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem
	I0914 23:07:27.153192 2956975 main.go:141] libmachine: Decoding PEM data...
	I0914 23:07:27.153210 2956975 main.go:141] libmachine: Parsing certificate...
	I0914 23:07:27.153270 2956975 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem
	I0914 23:07:27.153293 2956975 main.go:141] libmachine: Decoding PEM data...
	I0914 23:07:27.153307 2956975 main.go:141] libmachine: Parsing certificate...
	I0914 23:07:27.153566 2956975 cli_runner.go:164] Run: docker network inspect missing-upgrade-595333 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0914 23:07:27.170031 2956975 cli_runner.go:211] docker network inspect missing-upgrade-595333 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0914 23:07:27.170109 2956975 network_create.go:281] running [docker network inspect missing-upgrade-595333] to gather additional debugging logs...
	I0914 23:07:27.170132 2956975 cli_runner.go:164] Run: docker network inspect missing-upgrade-595333
	W0914 23:07:27.193106 2956975 cli_runner.go:211] docker network inspect missing-upgrade-595333 returned with exit code 1
	I0914 23:07:27.193139 2956975 network_create.go:284] error running [docker network inspect missing-upgrade-595333]: docker network inspect missing-upgrade-595333: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-595333 not found
	I0914 23:07:27.193152 2956975 network_create.go:286] output of [docker network inspect missing-upgrade-595333]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-595333 not found
	
	** /stderr **
	I0914 23:07:27.193217 2956975 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 23:07:27.215330 2956975 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1af2c56fe484 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:bc:82:c7:51} reservation:<nil>}
	I0914 23:07:27.215699 2956975 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-3d3dcc9eef60 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:9e:ba:b8:56} reservation:<nil>}
	I0914 23:07:27.216020 2956975 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e6f5401df9ff IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:a4:88:d0:cb} reservation:<nil>}
	I0914 23:07:27.216441 2956975 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018b0d50}
	I0914 23:07:27.216463 2956975 network_create.go:123] attempt to create docker network missing-upgrade-595333 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0914 23:07:27.216547 2956975 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-595333 missing-upgrade-595333
	I0914 23:07:27.289750 2956975 network_create.go:107] docker network missing-upgrade-595333 192.168.76.0/24 created
	I0914 23:07:27.289782 2956975 kic.go:117] calculated static IP "192.168.76.2" for the "missing-upgrade-595333" container
	I0914 23:07:27.289854 2956975 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0914 23:07:27.307157 2956975 cli_runner.go:164] Run: docker volume create missing-upgrade-595333 --label name.minikube.sigs.k8s.io=missing-upgrade-595333 --label created_by.minikube.sigs.k8s.io=true
	I0914 23:07:27.324167 2956975 oci.go:103] Successfully created a docker volume missing-upgrade-595333
	I0914 23:07:27.324245 2956975 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-595333-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-595333 --entrypoint /usr/bin/test -v missing-upgrade-595333:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I0914 23:07:28.955650 2956975 cli_runner.go:217] Completed: docker run --rm --name missing-upgrade-595333-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-595333 --entrypoint /usr/bin/test -v missing-upgrade-595333:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib: (1.631364087s)
	I0914 23:07:28.955681 2956975 oci.go:107] Successfully prepared a docker volume missing-upgrade-595333
	I0914 23:07:28.955699 2956975 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W0914 23:07:28.955836 2956975 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0914 23:07:28.955952 2956975 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0914 23:07:29.033232 2956975 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-595333 --name missing-upgrade-595333 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-595333 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-595333 --network missing-upgrade-595333 --ip 192.168.76.2 --volume missing-upgrade-595333:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I0914 23:07:29.396353 2956975 cli_runner.go:164] Run: docker container inspect missing-upgrade-595333 --format={{.State.Running}}
	I0914 23:07:29.423549 2956975 cli_runner.go:164] Run: docker container inspect missing-upgrade-595333 --format={{.State.Status}}
	I0914 23:07:29.448409 2956975 cli_runner.go:164] Run: docker exec missing-upgrade-595333 stat /var/lib/dpkg/alternatives/iptables
	I0914 23:07:29.551061 2956975 oci.go:144] the created container "missing-upgrade-595333" has a running status.
	I0914 23:07:29.551087 2956975 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/missing-upgrade-595333/id_rsa...
	I0914 23:07:29.894604 2956975 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/missing-upgrade-595333/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0914 23:07:29.929867 2956975 cli_runner.go:164] Run: docker container inspect missing-upgrade-595333 --format={{.State.Status}}
	I0914 23:07:29.959607 2956975 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0914 23:07:29.959625 2956975 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-595333 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0914 23:07:30.053738 2956975 cli_runner.go:164] Run: docker container inspect missing-upgrade-595333 --format={{.State.Status}}
	I0914 23:07:30.074998 2956975 machine.go:88] provisioning docker machine ...
	I0914 23:07:30.075030 2956975 ubuntu.go:169] provisioning hostname "missing-upgrade-595333"
	I0914 23:07:30.075116 2956975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-595333
	I0914 23:07:30.103932 2956975 main.go:141] libmachine: Using SSH client type: native
	I0914 23:07:30.104390 2956975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36557 <nil> <nil>}
	I0914 23:07:30.104404 2956975 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-595333 && echo "missing-upgrade-595333" | sudo tee /etc/hostname
	I0914 23:07:30.311040 2956975 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-595333
	
	I0914 23:07:30.311186 2956975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-595333
	I0914 23:07:30.340627 2956975 main.go:141] libmachine: Using SSH client type: native
	I0914 23:07:30.341033 2956975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36557 <nil> <nil>}
	I0914 23:07:30.341051 2956975 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-595333' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-595333/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-595333' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 23:07:30.505848 2956975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 23:07:30.505876 2956975 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17243-2840729/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-2840729/.minikube}
	I0914 23:07:30.505915 2956975 ubuntu.go:177] setting up certificates
	I0914 23:07:30.505925 2956975 provision.go:83] configureAuth start
	I0914 23:07:30.506003 2956975 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-595333
	I0914 23:07:30.527669 2956975 provision.go:138] copyHostCerts
	I0914 23:07:30.527734 2956975 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem, removing ...
	I0914 23:07:30.527743 2956975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem
	I0914 23:07:30.527818 2956975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem (1078 bytes)
	I0914 23:07:30.527900 2956975 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem, removing ...
	I0914 23:07:30.527905 2956975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem
	I0914 23:07:30.527930 2956975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem (1123 bytes)
	I0914 23:07:30.527981 2956975 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem, removing ...
	I0914 23:07:30.527985 2956975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem
	I0914 23:07:30.528009 2956975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem (1675 bytes)
	I0914 23:07:30.528048 2956975 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-595333 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-595333]
	I0914 23:07:30.828745 2956975 provision.go:172] copyRemoteCerts
	I0914 23:07:30.828814 2956975 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 23:07:30.828859 2956975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-595333
	I0914 23:07:30.861486 2956975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36557 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/missing-upgrade-595333/id_rsa Username:docker}
	I0914 23:07:30.970199 2956975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 23:07:30.993800 2956975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0914 23:07:31.015780 2956975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 23:07:31.038182 2956975 provision.go:86] duration metric: configureAuth took 532.237876ms
	I0914 23:07:31.038241 2956975 ubuntu.go:193] setting minikube options for container-runtime
	I0914 23:07:31.038447 2956975 config.go:182] Loaded profile config "missing-upgrade-595333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0914 23:07:31.038557 2956975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-595333
	I0914 23:07:31.058556 2956975 main.go:141] libmachine: Using SSH client type: native
	I0914 23:07:31.058977 2956975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36557 <nil> <nil>}
	I0914 23:07:31.059000 2956975 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 23:07:31.497129 2956975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 23:07:31.497151 2956975 machine.go:91] provisioned docker machine in 1.422133623s
	I0914 23:07:31.497160 2956975 client.go:171] LocalClient.Create took 4.344093648s
	I0914 23:07:31.497171 2956975 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-595333" took 4.344141813s
	I0914 23:07:31.497182 2956975 start.go:300] post-start starting for "missing-upgrade-595333" (driver="docker")
	I0914 23:07:31.497192 2956975 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 23:07:31.497258 2956975 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 23:07:31.497303 2956975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-595333
	I0914 23:07:31.515369 2956975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36557 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/missing-upgrade-595333/id_rsa Username:docker}
	I0914 23:07:31.613838 2956975 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 23:07:31.617731 2956975 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 23:07:31.617752 2956975 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 23:07:31.617764 2956975 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 23:07:31.617771 2956975 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0914 23:07:31.617780 2956975 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-2840729/.minikube/addons for local assets ...
	I0914 23:07:31.617829 2956975 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-2840729/.minikube/files for local assets ...
	I0914 23:07:31.617904 2956975 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem -> 28461092.pem in /etc/ssl/certs
	I0914 23:07:31.617999 2956975 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 23:07:31.627921 2956975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem --> /etc/ssl/certs/28461092.pem (1708 bytes)
	I0914 23:07:31.653433 2956975 start.go:303] post-start completed in 156.235779ms
	I0914 23:07:31.653856 2956975 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-595333
	I0914 23:07:31.672264 2956975 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/missing-upgrade-595333/config.json ...
	I0914 23:07:31.672577 2956975 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 23:07:31.672620 2956975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-595333
	I0914 23:07:31.694991 2956975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36557 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/missing-upgrade-595333/id_rsa Username:docker}
	I0914 23:07:31.791765 2956975 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 23:07:31.797811 2956975 start.go:128] duration metric: createHost completed in 4.646996278s
	I0914 23:07:31.797899 2956975 cli_runner.go:164] Run: docker container inspect missing-upgrade-595333 --format={{.State.Status}}
	W0914 23:07:31.821598 2956975 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 23:07:31.821625 2956975 machine.go:88] provisioning docker machine ...
	I0914 23:07:31.821642 2956975 ubuntu.go:169] provisioning hostname "missing-upgrade-595333"
	I0914 23:07:31.821717 2956975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-595333
	I0914 23:07:31.839906 2956975 main.go:141] libmachine: Using SSH client type: native
	I0914 23:07:31.840311 2956975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36557 <nil> <nil>}
	I0914 23:07:31.840330 2956975 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-595333 && echo "missing-upgrade-595333" | sudo tee /etc/hostname
	I0914 23:07:31.998308 2956975 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-595333
	
	I0914 23:07:31.998391 2956975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-595333
	I0914 23:07:32.019546 2956975 main.go:141] libmachine: Using SSH client type: native
	I0914 23:07:32.020075 2956975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36557 <nil> <nil>}
	I0914 23:07:32.020108 2956975 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-595333' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-595333/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-595333' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 23:07:32.165385 2956975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 23:07:32.165415 2956975 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17243-2840729/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-2840729/.minikube}
	I0914 23:07:32.165432 2956975 ubuntu.go:177] setting up certificates
	I0914 23:07:32.165441 2956975 provision.go:83] configureAuth start
	I0914 23:07:32.165508 2956975 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-595333
	I0914 23:07:32.185810 2956975 provision.go:138] copyHostCerts
	I0914 23:07:32.185876 2956975 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem, removing ...
	I0914 23:07:32.185888 2956975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem
	I0914 23:07:32.185966 2956975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem (1078 bytes)
	I0914 23:07:32.186068 2956975 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem, removing ...
	I0914 23:07:32.186077 2956975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem
	I0914 23:07:32.186105 2956975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem (1123 bytes)
	I0914 23:07:32.186161 2956975 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem, removing ...
	I0914 23:07:32.186170 2956975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem
	I0914 23:07:32.186194 2956975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem (1675 bytes)
	I0914 23:07:32.186241 2956975 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-595333 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-595333]
	I0914 23:07:32.339792 2956975 provision.go:172] copyRemoteCerts
	I0914 23:07:32.339889 2956975 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 23:07:32.339955 2956975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-595333
	I0914 23:07:32.366776 2956975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36557 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/missing-upgrade-595333/id_rsa Username:docker}
	I0914 23:07:32.472759 2956975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 23:07:32.510055 2956975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 23:07:32.559146 2956975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0914 23:07:32.588864 2956975 provision.go:86] duration metric: configureAuth took 423.408871ms
	I0914 23:07:32.588935 2956975 ubuntu.go:193] setting minikube options for container-runtime
	I0914 23:07:32.589137 2956975 config.go:182] Loaded profile config "missing-upgrade-595333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0914 23:07:32.589286 2956975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-595333
	I0914 23:07:32.619177 2956975 main.go:141] libmachine: Using SSH client type: native
	I0914 23:07:32.619598 2956975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36557 <nil> <nil>}
	I0914 23:07:32.619613 2956975 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 23:07:32.972804 2956975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 23:07:32.972830 2956975 machine.go:91] provisioned docker machine in 1.151196563s
	I0914 23:07:32.972841 2956975 start.go:300] post-start starting for "missing-upgrade-595333" (driver="docker")
	I0914 23:07:32.972853 2956975 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 23:07:32.972960 2956975 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 23:07:32.973037 2956975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-595333
	I0914 23:07:32.992591 2956975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36557 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/missing-upgrade-595333/id_rsa Username:docker}
	I0914 23:07:33.097100 2956975 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 23:07:33.103454 2956975 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 23:07:33.103478 2956975 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 23:07:33.103489 2956975 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 23:07:33.103497 2956975 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0914 23:07:33.103507 2956975 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-2840729/.minikube/addons for local assets ...
	I0914 23:07:33.103568 2956975 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-2840729/.minikube/files for local assets ...
	I0914 23:07:33.103643 2956975 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem -> 28461092.pem in /etc/ssl/certs
	I0914 23:07:33.103744 2956975 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 23:07:33.116891 2956975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem --> /etc/ssl/certs/28461092.pem (1708 bytes)
	I0914 23:07:33.157434 2956975 start.go:303] post-start completed in 184.576644ms
	I0914 23:07:33.157524 2956975 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 23:07:33.157566 2956975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-595333
	I0914 23:07:33.193323 2956975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36557 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/missing-upgrade-595333/id_rsa Username:docker}
	I0914 23:07:33.303665 2956975 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 23:07:33.311656 2956975 fix.go:56] fixHost completed within 22.718919304s
	I0914 23:07:33.311682 2956975 start.go:83] releasing machines lock for "missing-upgrade-595333", held for 22.718976009s
	I0914 23:07:33.311752 2956975 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-595333
	I0914 23:07:33.349949 2956975 ssh_runner.go:195] Run: cat /version.json
	I0914 23:07:33.350007 2956975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-595333
	I0914 23:07:33.350529 2956975 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 23:07:33.352477 2956975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-595333
	I0914 23:07:33.426408 2956975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36557 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/missing-upgrade-595333/id_rsa Username:docker}
	I0914 23:07:33.434625 2956975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36557 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/missing-upgrade-595333/id_rsa Username:docker}
	W0914 23:07:33.545423 2956975 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0914 23:07:33.545565 2956975 ssh_runner.go:195] Run: systemctl --version
	I0914 23:07:33.711074 2956975 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 23:07:33.836788 2956975 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 23:07:33.842894 2956975 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 23:07:33.877534 2956975 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0914 23:07:33.877610 2956975 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 23:07:33.947848 2956975 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 23:07:33.947911 2956975 start.go:469] detecting cgroup driver to use...
	I0914 23:07:33.947955 2956975 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0914 23:07:33.948036 2956975 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 23:07:33.981133 2956975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 23:07:33.995080 2956975 docker.go:196] disabling cri-docker service (if available) ...
	I0914 23:07:33.995183 2956975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 23:07:34.010080 2956975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 23:07:34.023511 2956975 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0914 23:07:34.071730 2956975 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0914 23:07:34.071800 2956975 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 23:07:34.186148 2956975 docker.go:212] disabling docker service ...
	I0914 23:07:34.186219 2956975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 23:07:34.237134 2956975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 23:07:34.248660 2956975 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 23:07:34.349347 2956975 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 23:07:34.450516 2956975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 23:07:34.463086 2956975 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 23:07:34.480404 2956975 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 23:07:34.480487 2956975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 23:07:34.494890 2956975 out.go:177] 
	W0914 23:07:34.496873 2956975 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0914 23:07:34.496893 2956975 out.go:239] * 
	* 
	W0914 23:07:34.497822 2956975 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:07:34.500758 2956975 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:344: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-595333 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:346: *** TestMissingContainerUpgrade FAILED at 2023-09-14 23:07:34.55469568 +0000 UTC m=+2454.149050951
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-595333
helpers_test.go:235: (dbg) docker inspect missing-upgrade-595333:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7a260d102f71d6d1284033059faa5cb420f61a532effb5b5b525203f9c157bf9",
	        "Created": "2023-09-14T23:07:29.049361001Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2957863,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-14T23:07:29.387319872Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/7a260d102f71d6d1284033059faa5cb420f61a532effb5b5b525203f9c157bf9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7a260d102f71d6d1284033059faa5cb420f61a532effb5b5b525203f9c157bf9/hostname",
	        "HostsPath": "/var/lib/docker/containers/7a260d102f71d6d1284033059faa5cb420f61a532effb5b5b525203f9c157bf9/hosts",
	        "LogPath": "/var/lib/docker/containers/7a260d102f71d6d1284033059faa5cb420f61a532effb5b5b525203f9c157bf9/7a260d102f71d6d1284033059faa5cb420f61a532effb5b5b525203f9c157bf9-json.log",
	        "Name": "/missing-upgrade-595333",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-595333:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-595333",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7a7aa30f15840acf4ca21badaf875fa9387f95793e6c7861a78197e100b49a3f-init/diff:/var/lib/docker/overlay2/7c5da317103d95204ba0214f0ef1a12508e64901f3157e24af3061ffc1e4ab2c/diff:/var/lib/docker/overlay2/5b98c6906039704573521f04becd106cb41206690477bc867ea3640fe19e5f73/diff:/var/lib/docker/overlay2/fdbfa6ccd992f0a761a93d77fffe2579992788f3e9a42802da17d79942a55194/diff:/var/lib/docker/overlay2/290866a47da2b40e026d6e51e7d3d71c9f3cf2712e8e918186279402819a5925/diff:/var/lib/docker/overlay2/010e671dd7277ab4d6fcebea1e8ae45df32172f4d78b7eea01522879f30197e6/diff:/var/lib/docker/overlay2/8af3150eee58189e4e1d576013909d62ce383e0aa75c241d6d8b0503f1fb4e4c/diff:/var/lib/docker/overlay2/55b87605c19756005b2e142b6bdc1265e7896535dbf570b2ffaddba63fb9da79/diff:/var/lib/docker/overlay2/141869b4171db4cedae050e1d0fbc222650fc0a7e3d212d05165a963b3cfb63f/diff:/var/lib/docker/overlay2/1516efaba3656ff816f42db5d6a3c7f4b77229eeda6c88b519d45adfcfcc6632/diff:/var/lib/docker/overlay2/92b945
493089feb64934863631b4beb3d9ce01a378ac5c2407b66b04ea2447a3/diff:/var/lib/docker/overlay2/ac2c36ef4335422d624e243d8cf3e24d6c2f4def64122c3eb512f60a1511e93c/diff:/var/lib/docker/overlay2/ed674f6894c13f612fee33912fda53361d1e96c0cc7c2317d1d7e622631c797d/diff:/var/lib/docker/overlay2/fb1c936cf47fe32bea5c11e8a886bb34dbb54dbeb7cd1310b26e64054cbd168d/diff:/var/lib/docker/overlay2/bfeebc6bdcc2eb29846897e9745903c9991a97ae36e8aaf5dcc32d49c7339d4f/diff:/var/lib/docker/overlay2/c7725dc869d0db4f1c49ed5aec2aa1191926cca97adc897c2b7986c684ad2b3b/diff:/var/lib/docker/overlay2/72450eeb6564f18ff89d9e468cbc6c23b0aee367aaed3c264b20e46d1b6680ef/diff:/var/lib/docker/overlay2/b15973f157b59c3fb8bdf5b3e5dc0fb5c70a5e9020a3390e398bbbdb9c7626b3/diff:/var/lib/docker/overlay2/eb884593b025987e6bf0bfe1d4e8e5c6a630a9b8684c12476fbc7b9de54c7caa/diff:/var/lib/docker/overlay2/398506c118ae2f43c1c7e6d8a9e4068f5e29459732f0962d002f5253ac7f012e/diff:/var/lib/docker/overlay2/10ff2d64c17d34a0a16ba90719c4553fe5d1f1fce0007baa02150d416d373e4e/diff:/var/lib/d
ocker/overlay2/30dd13feffdf6d16bd1f976f7c6e9bf7067ea11ab7dd113d7324729a8c072126/diff:/var/lib/docker/overlay2/95b33caad65119fa23181fbc14e0333335aea3ab6010c3fbf867bdbdd932aaea/diff:/var/lib/docker/overlay2/2333ca70477bffbdf6db697076b3fbd311a41daeeab4b8aca70fe7d6140605a5/diff:/var/lib/docker/overlay2/f878ccf3e37cb9f9dd3dc428e304724363e474adb406b69b0ed570118bbf8411/diff:/var/lib/docker/overlay2/ba3ebfd4698573b0180c0d5dbe6b3e271de61e9ce939f6282489f611f93dccab/diff:/var/lib/docker/overlay2/3a2dc505be783b0d76de275a15692ece062997531a9e6d3d9cca9203bd6c54e0/diff:/var/lib/docker/overlay2/d89a051735d353577c1783a5bd354e3587cb3bc6a03edc3e57105af2f5646097/diff:/var/lib/docker/overlay2/a4b10c936ee226011173b47efe4ed6b80aa1c9aa58085214e8cef13b8182c584/diff:/var/lib/docker/overlay2/3087e720eca7180c891833d9e2d728303c69cf63d87670543e067c413559f17d/diff:/var/lib/docker/overlay2/26db0195c0003b439823635c3dd99f31398e8656fd6939d416472fa1f7da01fe/diff:/var/lib/docker/overlay2/9f5e8bb5a94074569bf65f0d1beb8167c1b5f40140f4a7ff37f77e27ffc
0bc19/diff:/var/lib/docker/overlay2/d6377d4967be0eefdd83a8a27a1184fc01c28fc6fe5525b09b7ff59de3253ee2/diff:/var/lib/docker/overlay2/cd028181e647b680c76034fcc1e7c516ede314e45de48bf7dcdca3288b80f5bd/diff:/var/lib/docker/overlay2/8aef14538f95ec81f89337c7f29bd8660c76346e678d2efeeb01072595aece76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7a7aa30f15840acf4ca21badaf875fa9387f95793e6c7861a78197e100b49a3f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7a7aa30f15840acf4ca21badaf875fa9387f95793e6c7861a78197e100b49a3f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7a7aa30f15840acf4ca21badaf875fa9387f95793e6c7861a78197e100b49a3f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-595333",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-595333/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-595333",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-595333",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-595333",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d68c52d093392b2f384ad29e8d5bfd12db765db586539b7d9f423940c5c91c6b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36557"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36556"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36553"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36555"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36554"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d68c52d09339",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-595333": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7a260d102f71",
	                        "missing-upgrade-595333"
	                    ],
	                    "NetworkID": "4b75d7248414f124cc32cc0498aae8078dbaa63b3b8cce204110952e69debe0e",
	                    "EndpointID": "d2735fd1c7f90481edf595fe0d1ba4f8e5943b611a91d3c5f47fda13e5e506c2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-595333 -n missing-upgrade-595333
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-595333 -n missing-upgrade-595333: exit status 6 (418.983766ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 23:07:34.972083 2959111 status.go:415] kubeconfig endpoint: got: 192.168.59.134:8443, want: 192.168.76.2:8443

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-595333" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-595333" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-595333
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-595333: (2.229845183s)
--- FAIL: TestMissingContainerUpgrade (169.11s)
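Note (editorial): the exit status 90 above traces to the sed on /etc/crio/crio.conf.d/02-crio.conf, a file that the kicbase v0.0.17 image started by the old minikube binary does not contain. The Go sketch below is only an illustration of a guarded version of that step; runCmd is a hypothetical stand-in for the ssh_runner seen in the log, and the fallback to the legacy /etc/crio/crio.conf path is an assumption about older images, not minikube's actual behavior.

	// Sketch only (not minikube code): guarded pause_image update.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func updatePauseImage(runCmd func(string) error, pauseImage string) error {
		dropIn := "/etc/crio/crio.conf.d/02-crio.conf" // path from the failing sed above
		legacy := "/etc/crio/crio.conf"                // assumed fallback for pre-drop-in images

		target := dropIn
		if err := runCmd("sudo test -f " + dropIn); err != nil {
			// Old kicbase images (v0.0.17 in this run) do not ship the drop-in file,
			// which is why the unguarded sed exits with status 2.
			target = legacy
		}
		sed := fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, target)
		if err := runCmd(sed); err != nil {
			return fmt.Errorf("update pause_image in %s: %w", target, err)
		}
		return nil
	}

	func main() {
		// Demonstration only: run the commands through /bin/sh rather than over SSH.
		local := func(cmd string) error { return exec.Command("/bin/sh", "-c", cmd).Run() }
		if err := updatePauseImage(local, "registry.k8s.io/pause:3.2"); err != nil {
			fmt.Println("update failed:", err)
		}
	}

Run locally, the sketch simply exercises the same shell commands shown in the log through /bin/sh instead of SSH.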

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (81.71s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.3821540780.exe start -p stopped-upgrade-686061 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0914 23:07:52.877433 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.17.0.3821540780.exe start -p stopped-upgrade-686061 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m3.606253385s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.17.0.3821540780.exe -p stopped-upgrade-686061 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.17.0.3821540780.exe -p stopped-upgrade-686061 stop: (11.803433727s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-686061 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-686061 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.298110501s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-686061] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-686061 in cluster stopped-upgrade-686061
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-686061" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:08:53.724754 2964019 out.go:296] Setting OutFile to fd 1 ...
	I0914 23:08:53.724999 2964019 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 23:08:53.725030 2964019 out.go:309] Setting ErrFile to fd 2...
	I0914 23:08:53.725051 2964019 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 23:08:53.725363 2964019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-2840729/.minikube/bin
	I0914 23:08:53.725771 2964019 out.go:303] Setting JSON to false
	I0914 23:08:53.727032 2964019 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":82279,"bootTime":1694650655,"procs":396,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0914 23:08:53.727128 2964019 start.go:138] virtualization:  
	I0914 23:08:53.729776 2964019 out.go:177] * [stopped-upgrade-686061] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0914 23:08:53.731478 2964019 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 23:08:53.733470 2964019 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:08:53.731573 2964019 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0914 23:08:53.735177 2964019 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 23:08:53.736979 2964019 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube
	I0914 23:08:53.731623 2964019 notify.go:220] Checking for updates...
	I0914 23:08:53.740643 2964019 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 23:08:53.743904 2964019 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:08:53.746087 2964019 config.go:182] Loaded profile config "stopped-upgrade-686061": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0914 23:08:53.748607 2964019 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0914 23:08:53.750720 2964019 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 23:08:53.782659 2964019 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0914 23:08:53.782760 2964019 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 23:08:53.875663 2964019 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I0914 23:08:53.892849 2964019 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:45 SystemTime:2023-09-14 23:08:53.883011444 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 23:08:53.892949 2964019 docker.go:294] overlay module found
	I0914 23:08:53.895889 2964019 out.go:177] * Using the docker driver based on existing profile
	I0914 23:08:53.897796 2964019 start.go:298] selected driver: docker
	I0914 23:08:53.897814 2964019 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-686061 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-686061 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.238 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0914 23:08:53.897917 2964019 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:08:53.898496 2964019 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 23:08:53.970205 2964019 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:45 SystemTime:2023-09-14 23:08:53.960793363 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 23:08:53.970519 2964019 cni.go:84] Creating CNI manager for ""
	I0914 23:08:53.970535 2964019 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 23:08:53.970552 2964019 start_flags.go:321] config:
	{Name:stopped-upgrade-686061 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-686061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.238 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0914 23:08:53.972785 2964019 out.go:177] * Starting control plane node stopped-upgrade-686061 in cluster stopped-upgrade-686061
	I0914 23:08:53.974696 2964019 cache.go:122] Beginning downloading kic base image for docker with crio
	I0914 23:08:53.976454 2964019 out.go:177] * Pulling base image ...
	I0914 23:08:53.978161 2964019 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0914 23:08:53.978241 2964019 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0914 23:08:53.996123 2964019 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0914 23:08:53.996147 2964019 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0914 23:08:54.056597 2964019 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0914 23:08:54.056745 2964019 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/stopped-upgrade-686061/config.json ...
	I0914 23:08:54.056830 2964019 cache.go:107] acquiring lock: {Name:mkfb7a01b2c28b895311d739176129f21ced99a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:08:54.056916 2964019 cache.go:115] /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0914 23:08:54.056925 2964019 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 101.645µs
	I0914 23:08:54.056934 2964019 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0914 23:08:54.056941 2964019 cache.go:107] acquiring lock: {Name:mk76032a19b68713b35124ddc897f19026a7438e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:08:54.056970 2964019 cache.go:115] /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0914 23:08:54.056975 2964019 cache.go:195] Successfully downloaded all kic artifacts
	I0914 23:08:54.056986 2964019 cache.go:107] acquiring lock: {Name:mkf71df78f864c319bca733d50832feee5564baf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:08:54.057015 2964019 cache.go:115] /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0914 23:08:54.057015 2964019 start.go:365] acquiring machines lock for stopped-upgrade-686061: {Name:mke8489dac77306595fb882a1dfca74723057b9b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:08:54.057030 2964019 cache.go:107] acquiring lock: {Name:mk944319bce43d0fa25bd88cf1d736630e27c361 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:08:54.057054 2964019 start.go:369] acquired machines lock for "stopped-upgrade-686061" in 25.362µs
	I0914 23:08:54.057059 2964019 cache.go:115] /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0914 23:08:54.057065 2964019 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 36.037µs
	I0914 23:08:54.056976 2964019 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 35.528µs
	I0914 23:08:54.057073 2964019 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0914 23:08:54.057068 2964019 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:08:54.057021 2964019 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 36.283µs
	I0914 23:08:54.057082 2964019 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0914 23:08:54.057092 2964019 fix.go:54] fixHost starting: 
	I0914 23:08:54.057092 2964019 cache.go:107] acquiring lock: {Name:mkb7d0b3abfb357f1e3ae0d11b10b748d2917268 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:08:54.057129 2964019 cache.go:115] /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0914 23:08:54.057142 2964019 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 45.974µs
	I0914 23:08:54.057149 2964019 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0914 23:08:54.057156 2964019 cache.go:107] acquiring lock: {Name:mk502a05df334e4a26286e901955bc5afc10de99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:08:54.057184 2964019 cache.go:115] /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0914 23:08:54.057189 2964019 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 34.051µs
	I0914 23:08:54.057195 2964019 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0914 23:08:54.057206 2964019 cache.go:107] acquiring lock: {Name:mk1214f169a1ad7693ec44c2ba0a0424a92b8633 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:08:54.057230 2964019 cache.go:115] /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0914 23:08:54.057235 2964019 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 30.195µs
	I0914 23:08:54.057241 2964019 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0914 23:08:54.057250 2964019 cache.go:107] acquiring lock: {Name:mka620cea4c285c172f61f5d6134ade35fc05543 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:08:54.057274 2964019 cache.go:115] /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0914 23:08:54.057279 2964019 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 29.506µs
	I0914 23:08:54.057285 2964019 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0914 23:08:54.057362 2964019 cli_runner.go:164] Run: docker container inspect stopped-upgrade-686061 --format={{.State.Status}}
	I0914 23:08:54.057076 2964019 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0914 23:08:54.057397 2964019 cache.go:87] Successfully saved all images to host disk.
	I0914 23:08:54.078334 2964019 fix.go:102] recreateIfNeeded on stopped-upgrade-686061: state=Stopped err=<nil>
	W0914 23:08:54.078364 2964019 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 23:08:54.080786 2964019 out.go:177] * Restarting existing docker container for "stopped-upgrade-686061" ...
	I0914 23:08:54.082732 2964019 cli_runner.go:164] Run: docker start stopped-upgrade-686061
	I0914 23:08:54.484686 2964019 cli_runner.go:164] Run: docker container inspect stopped-upgrade-686061 --format={{.State.Status}}
	I0914 23:08:54.518940 2964019 kic.go:426] container "stopped-upgrade-686061" state is running.
	I0914 23:08:54.519307 2964019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-686061
	I0914 23:08:54.552165 2964019 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/stopped-upgrade-686061/config.json ...
	I0914 23:08:54.552398 2964019 machine.go:88] provisioning docker machine ...
	I0914 23:08:54.552435 2964019 ubuntu.go:169] provisioning hostname "stopped-upgrade-686061"
	I0914 23:08:54.552519 2964019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-686061
	I0914 23:08:54.578218 2964019 main.go:141] libmachine: Using SSH client type: native
	I0914 23:08:54.580879 2964019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36570 <nil> <nil>}
	I0914 23:08:54.580904 2964019 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-686061 && echo "stopped-upgrade-686061" | sudo tee /etc/hostname
	I0914 23:08:54.581583 2964019 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36724->127.0.0.1:36570: read: connection reset by peer
	I0914 23:08:57.736719 2964019 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-686061
	
	I0914 23:08:57.736803 2964019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-686061
	I0914 23:08:57.756456 2964019 main.go:141] libmachine: Using SSH client type: native
	I0914 23:08:57.756918 2964019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36570 <nil> <nil>}
	I0914 23:08:57.756945 2964019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-686061' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-686061/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-686061' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 23:08:57.897472 2964019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 23:08:57.897505 2964019 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17243-2840729/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-2840729/.minikube}
	I0914 23:08:57.897538 2964019 ubuntu.go:177] setting up certificates
	I0914 23:08:57.897547 2964019 provision.go:83] configureAuth start
	I0914 23:08:57.897617 2964019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-686061
	I0914 23:08:57.918564 2964019 provision.go:138] copyHostCerts
	I0914 23:08:57.918624 2964019 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem, removing ...
	I0914 23:08:57.918633 2964019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem
	I0914 23:08:57.918733 2964019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem (1078 bytes)
	I0914 23:08:57.918835 2964019 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem, removing ...
	I0914 23:08:57.918841 2964019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem
	I0914 23:08:57.918868 2964019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem (1123 bytes)
	I0914 23:08:57.918918 2964019 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem, removing ...
	I0914 23:08:57.918923 2964019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem
	I0914 23:08:57.918946 2964019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem (1675 bytes)
	I0914 23:08:57.918988 2964019 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-686061 san=[192.168.59.238 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-686061]
	I0914 23:08:58.206537 2964019 provision.go:172] copyRemoteCerts
	I0914 23:08:58.206614 2964019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 23:08:58.206664 2964019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-686061
	I0914 23:08:58.230035 2964019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36570 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/stopped-upgrade-686061/id_rsa Username:docker}
	I0914 23:08:58.329265 2964019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 23:08:58.351710 2964019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0914 23:08:58.374718 2964019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 23:08:58.397084 2964019 provision.go:86] duration metric: configureAuth took 499.522988ms
	I0914 23:08:58.397110 2964019 ubuntu.go:193] setting minikube options for container-runtime
	I0914 23:08:58.397304 2964019 config.go:182] Loaded profile config "stopped-upgrade-686061": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0914 23:08:58.397422 2964019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-686061
	I0914 23:08:58.415345 2964019 main.go:141] libmachine: Using SSH client type: native
	I0914 23:08:58.415758 2964019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36570 <nil> <nil>}
	I0914 23:08:58.415778 2964019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 23:08:58.858027 2964019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 23:08:58.858050 2964019 machine.go:91] provisioned docker machine in 4.305635188s
	I0914 23:08:58.858063 2964019 start.go:300] post-start starting for "stopped-upgrade-686061" (driver="docker")
	I0914 23:08:58.858075 2964019 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 23:08:58.858138 2964019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 23:08:58.858183 2964019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-686061
	I0914 23:08:58.875873 2964019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36570 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/stopped-upgrade-686061/id_rsa Username:docker}
	I0914 23:08:58.973370 2964019 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 23:08:58.977249 2964019 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 23:08:58.977278 2964019 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 23:08:58.977290 2964019 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 23:08:58.977298 2964019 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0914 23:08:58.977308 2964019 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-2840729/.minikube/addons for local assets ...
	I0914 23:08:58.977366 2964019 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-2840729/.minikube/files for local assets ...
	I0914 23:08:58.977459 2964019 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem -> 28461092.pem in /etc/ssl/certs
	I0914 23:08:58.977563 2964019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 23:08:58.986247 2964019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem --> /etc/ssl/certs/28461092.pem (1708 bytes)
	I0914 23:08:59.008784 2964019 start.go:303] post-start completed in 150.703057ms
	I0914 23:08:59.008866 2964019 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 23:08:59.008907 2964019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-686061
	I0914 23:08:59.026314 2964019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36570 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/stopped-upgrade-686061/id_rsa Username:docker}
	I0914 23:08:59.122211 2964019 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 23:08:59.127608 2964019 fix.go:56] fixHost completed within 5.070516919s
	I0914 23:08:59.127628 2964019 start.go:83] releasing machines lock for "stopped-upgrade-686061", held for 5.070565609s
	I0914 23:08:59.127693 2964019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-686061
	I0914 23:08:59.145845 2964019 ssh_runner.go:195] Run: cat /version.json
	I0914 23:08:59.145906 2964019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-686061
	I0914 23:08:59.146187 2964019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 23:08:59.146250 2964019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-686061
	I0914 23:08:59.170009 2964019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36570 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/stopped-upgrade-686061/id_rsa Username:docker}
	I0914 23:08:59.177574 2964019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36570 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/stopped-upgrade-686061/id_rsa Username:docker}
	W0914 23:08:59.264743 2964019 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0914 23:08:59.264821 2964019 ssh_runner.go:195] Run: systemctl --version
	I0914 23:08:59.340925 2964019 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 23:08:59.448824 2964019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 23:08:59.454474 2964019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 23:08:59.477789 2964019 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0914 23:08:59.477885 2964019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 23:08:59.506800 2964019 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 23:08:59.506820 2964019 start.go:469] detecting cgroup driver to use...
	I0914 23:08:59.506850 2964019 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0914 23:08:59.506897 2964019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 23:08:59.534771 2964019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 23:08:59.546623 2964019 docker.go:196] disabling cri-docker service (if available) ...
	I0914 23:08:59.546740 2964019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 23:08:59.558580 2964019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 23:08:59.569838 2964019 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0914 23:08:59.581374 2964019 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0914 23:08:59.581442 2964019 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 23:08:59.678729 2964019 docker.go:212] disabling docker service ...
	I0914 23:08:59.678794 2964019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 23:08:59.691633 2964019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 23:08:59.703809 2964019 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 23:08:59.810788 2964019 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 23:08:59.911902 2964019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 23:08:59.924388 2964019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 23:08:59.941783 2964019 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 23:08:59.941885 2964019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 23:08:59.954244 2964019 out.go:177] 
	W0914 23:08:59.956074 2964019 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0914 23:08:59.956092 2964019 out.go:239] * 
	* 
	W0914 23:08:59.956993 2964019 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 23:08:59.959156 2964019 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-686061 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (81.71s)
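The failing step recorded above is the in-place sed on /etc/crio/crio.conf.d/02-crio.conf: the guest created from the old v1.17.0 profile has no such drop-in file, so sed exits with status 2 and the start aborts with RUNTIME_ENABLE. The Go sketch below is not minikube's implementation; it is a hypothetical guard with made-up helper names, showing how the pause_image rewrite could first create the drop-in so the sed cannot fail on a missing file.

	// Hypothetical guard, for illustration only (not minikube code): make sure
	// the cri-o drop-in exists before rewriting pause_image in place, so the
	// sed cannot fail with "No such file or directory" as in the log above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run stands in for minikube's ssh_runner; here it simply execs locally.
	func run(cmd string) error {
		out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%q failed: %v: %s", cmd, err, out)
		}
		return nil
	}

	// setPauseImage creates /etc/crio/crio.conf.d/02-crio.conf if it is missing,
	// then rewrites (or appends) the pause_image key.
	func setPauseImage(image string) error {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		ensure := fmt.Sprintf(`sudo sh -c 'mkdir -p /etc/crio/crio.conf.d && { [ -f %[1]s ] || printf "[crio.image]\n" > %[1]s; }'`, conf)
		if err := run(ensure); err != nil {
			return err
		}
		update := fmt.Sprintf(`sudo sh -c 'grep -q "^pause_image" %[1]s && sed -i "s|^pause_image = .*$|pause_image = \"%[2]s\"|" %[1]s || echo "pause_image = \"%[2]s\"" >> %[1]s'`, conf, image)
		return run(update)
	}

	func main() {
		if err := setPauseImage("registry.k8s.io/pause:3.2"); err != nil {
			fmt.Println("configure cri-o:", err)
		}
	}

Under that assumption the older guest would get a fresh drop-in containing only the pause_image line, which is enough for the sed used by newer starts to keep working.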

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (83.01s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-188837 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0914 23:11:42.928288 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-188837 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m14.616653259s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-188837] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node pause-188837 in cluster pause-188837
	* Pulling base image ...
	* Updating the running docker "pause-188837" container ...
	* Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-188837" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
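The stdout above shows the second start re-provisioning the existing container ("Updating the running docker \"pause-188837\" container ...", "Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...") rather than printing the "The running cluster does not require reconfiguration" message that pause_test.go:100 looks for. Below is a minimal sketch of that kind of substring assertion; the command flags and the expected string are taken from the log, but the structure is an assumption, not the actual test code.

	// Hypothetical sketch of the check pause_test.go:100 appears to make:
	// run the second start and assert the output contains the
	// "no reconfiguration" marker. Helper structure is illustrative only.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Second start against the existing profile; flags mirror the log above.
		cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "pause-188837",
			"--alsologtostderr", "-v=1", "--driver=docker", "--container-runtime=crio")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Println("second start failed:", err)
			return
		}
		const want = "The running cluster does not require reconfiguration"
		if !strings.Contains(string(out), want) {
			fmt.Printf("expected second start output to include %q\n", want)
		}
	}
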
** stderr ** 
	I0914 23:11:04.547666 2974444 out.go:296] Setting OutFile to fd 1 ...
	I0914 23:11:04.547907 2974444 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 23:11:04.547934 2974444 out.go:309] Setting ErrFile to fd 2...
	I0914 23:11:04.547953 2974444 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 23:11:04.548248 2974444 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-2840729/.minikube/bin
	I0914 23:11:04.548690 2974444 out.go:303] Setting JSON to false
	I0914 23:11:04.549834 2974444 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":82409,"bootTime":1694650655,"procs":288,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0914 23:11:04.549976 2974444 start.go:138] virtualization:  
	I0914 23:11:04.553370 2974444 out.go:177] * [pause-188837] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0914 23:11:04.555446 2974444 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 23:11:04.557526 2974444 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:11:04.555598 2974444 notify.go:220] Checking for updates...
	I0914 23:11:04.561837 2974444 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 23:11:04.563846 2974444 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube
	I0914 23:11:04.566121 2974444 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 23:11:04.568145 2974444 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:11:04.571124 2974444 config.go:182] Loaded profile config "pause-188837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 23:11:04.571720 2974444 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 23:11:04.601368 2974444 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0914 23:11:04.601480 2974444 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 23:11:04.685703 2974444 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:55 SystemTime:2023-09-14 23:11:04.674761435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 23:11:04.685839 2974444 docker.go:294] overlay module found
	I0914 23:11:04.688888 2974444 out.go:177] * Using the docker driver based on existing profile
	I0914 23:11:04.691019 2974444 start.go:298] selected driver: docker
	I0914 23:11:04.691038 2974444 start.go:902] validating driver "docker" against &{Name:pause-188837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-188837 Namespace:default APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-c
reds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 23:11:04.691177 2974444 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:11:04.691279 2974444 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 23:11:04.759926 2974444 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:55 SystemTime:2023-09-14 23:11:04.750066072 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 23:11:04.760347 2974444 cni.go:84] Creating CNI manager for ""
	I0914 23:11:04.760364 2974444 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 23:11:04.760375 2974444 start_flags.go:321] config:
	{Name:pause-188837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-188837 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesna
pshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 23:11:04.762780 2974444 out.go:177] * Starting control plane node pause-188837 in cluster pause-188837
	I0914 23:11:04.764679 2974444 cache.go:122] Beginning downloading kic base image for docker with crio
	I0914 23:11:04.766482 2974444 out.go:177] * Pulling base image ...
	I0914 23:11:04.768513 2974444 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 23:11:04.768569 2974444 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4
	I0914 23:11:04.768594 2974444 cache.go:57] Caching tarball of preloaded images
	I0914 23:11:04.768603 2974444 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 in local docker daemon
	I0914 23:11:04.768683 2974444 preload.go:174] Found /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0914 23:11:04.768696 2974444 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0914 23:11:04.768830 2974444 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837/config.json ...
	I0914 23:11:04.787734 2974444 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 in local docker daemon, skipping pull
	I0914 23:11:04.787758 2974444 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 exists in daemon, skipping load
	I0914 23:11:04.787784 2974444 cache.go:195] Successfully downloaded all kic artifacts
	I0914 23:11:04.787815 2974444 start.go:365] acquiring machines lock for pause-188837: {Name:mka063723d4b6700976ea6407ac3c1ec17d43a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:11:04.787893 2974444 start.go:369] acquired machines lock for "pause-188837" in 50.363µs
	I0914 23:11:04.787917 2974444 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:11:04.787928 2974444 fix.go:54] fixHost starting: 
	I0914 23:11:04.788208 2974444 cli_runner.go:164] Run: docker container inspect pause-188837 --format={{.State.Status}}
	I0914 23:11:04.806509 2974444 fix.go:102] recreateIfNeeded on pause-188837: state=Running err=<nil>
	W0914 23:11:04.806565 2974444 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 23:11:04.808663 2974444 out.go:177] * Updating the running docker "pause-188837" container ...
	I0914 23:11:04.810732 2974444 machine.go:88] provisioning docker machine ...
	I0914 23:11:04.810776 2974444 ubuntu.go:169] provisioning hostname "pause-188837"
	I0914 23:11:04.810849 2974444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188837
	I0914 23:11:04.828956 2974444 main.go:141] libmachine: Using SSH client type: native
	I0914 23:11:04.829381 2974444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36579 <nil> <nil>}
	I0914 23:11:04.829394 2974444 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-188837 && echo "pause-188837" | sudo tee /etc/hostname
	I0914 23:11:04.991112 2974444 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-188837
	
	I0914 23:11:04.991222 2974444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188837
	I0914 23:11:05.011452 2974444 main.go:141] libmachine: Using SSH client type: native
	I0914 23:11:05.011866 2974444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36579 <nil> <nil>}
	I0914 23:11:05.011891 2974444 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-188837' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-188837/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-188837' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 23:11:05.157746 2974444 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 23:11:05.157775 2974444 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17243-2840729/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-2840729/.minikube}
	I0914 23:11:05.157794 2974444 ubuntu.go:177] setting up certificates
	I0914 23:11:05.157804 2974444 provision.go:83] configureAuth start
	I0914 23:11:05.157867 2974444 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-188837
	I0914 23:11:05.176416 2974444 provision.go:138] copyHostCerts
	I0914 23:11:05.176488 2974444 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem, removing ...
	I0914 23:11:05.176641 2974444 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem
	I0914 23:11:05.176728 2974444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem (1078 bytes)
	I0914 23:11:05.176862 2974444 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem, removing ...
	I0914 23:11:05.176876 2974444 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem
	I0914 23:11:05.176908 2974444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem (1123 bytes)
	I0914 23:11:05.176977 2974444 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem, removing ...
	I0914 23:11:05.176986 2974444 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem
	I0914 23:11:05.177013 2974444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem (1675 bytes)
	I0914 23:11:05.177064 2974444 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem org=jenkins.pause-188837 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube pause-188837]
	I0914 23:11:05.660441 2974444 provision.go:172] copyRemoteCerts
	I0914 23:11:05.660532 2974444 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 23:11:05.660582 2974444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188837
	I0914 23:11:05.678368 2974444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36579 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/pause-188837/id_rsa Username:docker}
	I0914 23:11:05.783074 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 23:11:05.811415 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0914 23:11:05.840113 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 23:11:05.868007 2974444 provision.go:86] duration metric: configureAuth took 710.187904ms
	I0914 23:11:05.868031 2974444 ubuntu.go:193] setting minikube options for container-runtime
	I0914 23:11:05.868244 2974444 config.go:182] Loaded profile config "pause-188837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 23:11:05.868349 2974444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188837
	I0914 23:11:05.887485 2974444 main.go:141] libmachine: Using SSH client type: native
	I0914 23:11:05.887887 2974444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36579 <nil> <nil>}
	I0914 23:11:05.887902 2974444 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 23:11:11.335581 2974444 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 23:11:11.335605 2974444 machine.go:91] provisioned docker machine in 6.524857182s
	I0914 23:11:11.335616 2974444 start.go:300] post-start starting for "pause-188837" (driver="docker")
	I0914 23:11:11.335626 2974444 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 23:11:11.335688 2974444 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 23:11:11.335737 2974444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188837
	I0914 23:11:11.365208 2974444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36579 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/pause-188837/id_rsa Username:docker}
	I0914 23:11:11.563934 2974444 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 23:11:11.592618 2974444 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 23:11:11.592655 2974444 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 23:11:11.592666 2974444 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 23:11:11.592674 2974444 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0914 23:11:11.592685 2974444 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-2840729/.minikube/addons for local assets ...
	I0914 23:11:11.592745 2974444 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-2840729/.minikube/files for local assets ...
	I0914 23:11:11.592837 2974444 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem -> 28461092.pem in /etc/ssl/certs
	I0914 23:11:11.592947 2974444 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 23:11:11.627618 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem --> /etc/ssl/certs/28461092.pem (1708 bytes)
	I0914 23:11:11.697755 2974444 start.go:303] post-start completed in 362.123663ms
	I0914 23:11:11.697844 2974444 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 23:11:11.697897 2974444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188837
	I0914 23:11:11.733844 2974444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36579 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/pause-188837/id_rsa Username:docker}
	I0914 23:11:11.920063 2974444 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 23:11:11.932169 2974444 fix.go:56] fixHost completed within 7.144232092s
	I0914 23:11:11.932191 2974444 start.go:83] releasing machines lock for "pause-188837", held for 7.144286402s
	I0914 23:11:11.932267 2974444 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-188837
	I0914 23:11:11.961776 2974444 ssh_runner.go:195] Run: cat /version.json
	I0914 23:11:11.961844 2974444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188837
	I0914 23:11:11.961784 2974444 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 23:11:11.961937 2974444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188837
	I0914 23:11:12.031313 2974444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36579 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/pause-188837/id_rsa Username:docker}
	I0914 23:11:12.032691 2974444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36579 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/pause-188837/id_rsa Username:docker}
	I0914 23:11:12.329048 2974444 ssh_runner.go:195] Run: systemctl --version
	I0914 23:11:12.344393 2974444 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 23:11:12.574254 2974444 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 23:11:12.594601 2974444 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 23:11:12.622917 2974444 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0914 23:11:12.623060 2974444 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 23:11:12.650952 2974444 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0914 23:11:12.651020 2974444 start.go:469] detecting cgroup driver to use...
	I0914 23:11:12.651065 2974444 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0914 23:11:12.651150 2974444 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 23:11:12.682304 2974444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 23:11:12.709810 2974444 docker.go:196] disabling cri-docker service (if available) ...
	I0914 23:11:12.709918 2974444 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 23:11:12.752989 2974444 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 23:11:12.778991 2974444 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 23:11:13.053546 2974444 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 23:11:13.323443 2974444 docker.go:212] disabling docker service ...
	I0914 23:11:13.323575 2974444 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 23:11:13.384877 2974444 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 23:11:13.420070 2974444 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 23:11:13.720055 2974444 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 23:11:14.062535 2974444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 23:11:14.095915 2974444 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 23:11:14.172141 2974444 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 23:11:14.172205 2974444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 23:11:14.212705 2974444 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 23:11:14.212778 2974444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 23:11:14.261056 2974444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 23:11:14.327733 2974444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 23:11:14.378301 2974444 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 23:11:14.410989 2974444 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 23:11:14.443201 2974444 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 23:11:14.467790 2974444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 23:11:14.747234 2974444 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 23:11:22.305411 2974444 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.558107779s)
	I0914 23:11:22.305438 2974444 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 23:11:22.305506 2974444 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 23:11:22.311297 2974444 start.go:537] Will wait 60s for crictl version
	I0914 23:11:22.311366 2974444 ssh_runner.go:195] Run: which crictl
	I0914 23:11:22.315744 2974444 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 23:11:22.358423 2974444 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0914 23:11:22.358507 2974444 ssh_runner.go:195] Run: crio --version
	I0914 23:11:22.406220 2974444 ssh_runner.go:195] Run: crio --version
	I0914 23:11:22.462128 2974444 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0914 23:11:22.464122 2974444 cli_runner.go:164] Run: docker network inspect pause-188837 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 23:11:22.481350 2974444 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0914 23:11:22.486007 2974444 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 23:11:22.486075 2974444 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 23:11:22.526083 2974444 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 23:11:22.526106 2974444 crio.go:415] Images already preloaded, skipping extraction
	I0914 23:11:22.526160 2974444 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 23:11:22.565865 2974444 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 23:11:22.565889 2974444 cache_images.go:84] Images are preloaded, skipping loading
	I0914 23:11:22.565965 2974444 ssh_runner.go:195] Run: crio config
	I0914 23:11:22.627062 2974444 cni.go:84] Creating CNI manager for ""
	I0914 23:11:22.627086 2974444 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 23:11:22.627111 2974444 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 23:11:22.627130 2974444 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-188837 NodeName:pause-188837 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 23:11:22.627275 2974444 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-188837"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 23:11:22.627351 2974444 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-188837 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:pause-188837 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 23:11:22.627422 2974444 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 23:11:22.638184 2974444 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 23:11:22.638267 2974444 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 23:11:22.648371 2974444 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I0914 23:11:22.668685 2974444 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 23:11:22.689035 2974444 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I0914 23:11:22.709604 2974444 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0914 23:11:22.714013 2974444 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837 for IP: 192.168.76.2
	I0914 23:11:22.714049 2974444 certs.go:190] acquiring lock for shared ca certs: {Name:mk7b43b7d537d49c569d06654003547535d1ca4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:11:22.714185 2974444 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.key
	I0914 23:11:22.714231 2974444 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.key
	I0914 23:11:22.714306 2974444 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837/client.key
	I0914 23:11:22.714375 2974444 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837/apiserver.key.31bdca25
	I0914 23:11:22.714429 2974444 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837/proxy-client.key
	I0914 23:11:22.714546 2974444 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/2846109.pem (1338 bytes)
	W0914 23:11:22.714579 2974444 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/2846109_empty.pem, impossibly tiny 0 bytes
	I0914 23:11:22.714591 2974444 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 23:11:22.714619 2974444 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem (1078 bytes)
	I0914 23:11:22.714646 2974444 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem (1123 bytes)
	I0914 23:11:22.714673 2974444 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem (1675 bytes)
	I0914 23:11:22.714726 2974444 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem (1708 bytes)
	I0914 23:11:22.715797 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 23:11:22.747305 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 23:11:22.775146 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 23:11:22.802231 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 23:11:22.829806 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 23:11:22.857834 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 23:11:22.886062 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 23:11:22.913519 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 23:11:22.941281 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem --> /usr/share/ca-certificates/28461092.pem (1708 bytes)
	I0914 23:11:22.968583 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 23:11:22.996220 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/2846109.pem --> /usr/share/ca-certificates/2846109.pem (1338 bytes)
	I0914 23:11:23.023935 2974444 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 23:11:23.045041 2974444 ssh_runner.go:195] Run: openssl version
	I0914 23:11:23.052065 2974444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 23:11:23.063557 2974444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 23:11:23.068065 2974444 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 22:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 23:11:23.068126 2974444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 23:11:23.076644 2974444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 23:11:23.087423 2974444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2846109.pem && ln -fs /usr/share/ca-certificates/2846109.pem /etc/ssl/certs/2846109.pem"
	I0914 23:11:23.099605 2974444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2846109.pem
	I0914 23:11:23.104190 2974444 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 22:34 /usr/share/ca-certificates/2846109.pem
	I0914 23:11:23.104257 2974444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2846109.pem
	I0914 23:11:23.112868 2974444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2846109.pem /etc/ssl/certs/51391683.0"
	I0914 23:11:23.123578 2974444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28461092.pem && ln -fs /usr/share/ca-certificates/28461092.pem /etc/ssl/certs/28461092.pem"
	I0914 23:11:23.135088 2974444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28461092.pem
	I0914 23:11:23.139680 2974444 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 22:34 /usr/share/ca-certificates/28461092.pem
	I0914 23:11:23.139750 2974444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28461092.pem
	I0914 23:11:23.148653 2974444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/28461092.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 23:11:23.159472 2974444 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 23:11:23.163804 2974444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 23:11:23.172129 2974444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 23:11:23.180710 2974444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 23:11:23.189162 2974444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 23:11:23.197654 2974444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 23:11:23.206148 2974444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 23:11:23.214421 2974444 kubeadm.go:404] StartCluster: {Name:pause-188837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-188837 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServ
erIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-p
rovisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 23:11:23.214540 2974444 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 23:11:23.214600 2974444 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 23:11:23.257814 2974444 cri.go:89] found id: "3096294cf9ef2836640c6a63f9f3dfa4e709195bb30bc3eeb3edb0e785df0352"
	I0914 23:11:23.257883 2974444 cri.go:89] found id: "3ddca4ba5663e330ca359974f99a382136b3178138059f123f035b5ef453142b"
	I0914 23:11:23.257894 2974444 cri.go:89] found id: "bb70641b5371bd8bcb9d2648c449748c7bdc9a36befbcdba5dfb4add155e6e45"
	I0914 23:11:23.257902 2974444 cri.go:89] found id: "d8085d7321e65ea2e472b73ea3e32fb771ef86f98dd31e666c61b721765ec678"
	I0914 23:11:23.257906 2974444 cri.go:89] found id: "b148ea09494cac754d9aba5297c8b8139adb0036057ac049874cec2c08d50c9b"
	I0914 23:11:23.257912 2974444 cri.go:89] found id: "1ee06b53307457c677083117c9f598f2e47155f0ef6cfb7f16d3da49bc0c7336"
	I0914 23:11:23.257916 2974444 cri.go:89] found id: "f13a0e17ffd11a8efa7730301233ce818c79283eb992e004b5809a5e02460197"
	I0914 23:11:23.257921 2974444 cri.go:89] found id: "3b993f4a4efd1b16d3a6e63807f6858e16f35f158bca8d4683eee16ab99e85fb"
	I0914 23:11:23.257934 2974444 cri.go:89] found id: "c4d6d8178faf698da5ef35226961a972825968c13748ebe40df01345fd299a55"
	I0914 23:11:23.257944 2974444 cri.go:89] found id: "2871ed46af76b9eb0ff9f55c298e927ed32067ed58285b1f45f29c44ffda1868"
	I0914 23:11:23.257948 2974444 cri.go:89] found id: "a0c959683b247e8f18bad2254846f7f5800d6d75983dba1d009db272306fe74c"
	I0914 23:11:23.257952 2974444 cri.go:89] found id: "a11cc7c0c39389608d37cf9b9d99c9e773dd54af5e8e354ba84b9d667da0e11b"
	I0914 23:11:23.257957 2974444 cri.go:89] found id: "75d3d548281e02fd6383390a0e86f5df044efa5c5d00a8e6736524f3b53e876a"
	I0914 23:11:23.257967 2974444 cri.go:89] found id: "1856cb02fae9437a99a0b362295ee3197a1ec49123b0d2c8de9cc17137e0233a"
	I0914 23:11:23.257975 2974444 cri.go:89] found id: ""
	I0914 23:11:23.258026 2974444 ssh_runner.go:195] Run: sudo runc list -f json
	I0914 23:11:23.296936 2974444 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"1856cb02fae9437a99a0b362295ee3197a1ec49123b0d2c8de9cc17137e0233a","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/1856cb02fae9437a99a0b362295ee3197a1ec49123b0d2c8de9cc17137e0233a/userdata","rootfs":"/var/lib/containers/storage/overlay/70ebd58a95c7434e2d147538967ad102fb6e66192010f008b9f4debb1b32d68a/merged","created":"2023-09-14T23:10:34.461302205Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b4bfd9d0","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b4bfd9d0\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminatio
nMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1856cb02fae9437a99a0b362295ee3197a1ec49123b0d2c8de9cc17137e0233a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:10:34.331603852Z","io.kubernetes.cri-o.Image":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.1","io.kubernetes.cri-o.ImageRef":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-188837\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9daa06f1bce90ea27262295fdd763f52\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-188837_9daa06f1bce90ea27262295fdd763f52/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kube
rnetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/70ebd58a95c7434e2d147538967ad102fb6e66192010f008b9f4debb1b32d68a/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-188837_kube-system_9daa06f1bce90ea27262295fdd763f52_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a5fbc85f793393abbb3f5762b72f835014218bc8cb33ad2b3adf4eec5cee35fd/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a5fbc85f793393abbb3f5762b72f835014218bc8cb33ad2b3adf4eec5cee35fd","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-188837_kube-system_9daa06f1bce90ea27262295fdd763f52_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9daa06f1bce90ea27262295fdd763f52/containers/kube-apiserver/64e4b917\",\"readonly\":false,\"propagation\":0,\"selinux_re
label\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9daa06f1bce90ea27262295fdd763f52/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-188837","io.k
ubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9daa06f1bce90ea27262295fdd763f52","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.76.2:8443","kubernetes.io/config.hash":"9daa06f1bce90ea27262295fdd763f52","kubernetes.io/config.seen":"2023-09-14T23:10:33.801531701Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1ee06b53307457c677083117c9f598f2e47155f0ef6cfb7f16d3da49bc0c7336","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/1ee06b53307457c677083117c9f598f2e47155f0ef6cfb7f16d3da49bc0c7336/userdata","rootfs":"/var/lib/containers/storage/overlay/52aa0ded4b862138f24c8066d24452d9951215b3af2382a5d24724cf1990fd0b/merged","created":"2023-09-14T23:11:11.613927439Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"68d78db8","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container
.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"68d78db8\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1ee06b53307457c677083117c9f598f2e47155f0ef6cfb7f16d3da49bc0c7336","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:11:11.408097823Z","io.kubernetes.cri-o.Image":"812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.28.1","io.kubernetes.cri-o.ImageRef":"812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-lprw
g\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b888ea22-8d29-4c36-a973-02cd1262b1ae\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-lprwg_b888ea22-8d29-4c36-a973-02cd1262b1ae/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/52aa0ded4b862138f24c8066d24452d9951215b3af2382a5d24724cf1990fd0b/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-lprwg_kube-system_b888ea22-8d29-4c36-a973-02cd1262b1ae_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/10108216db33f60880a26371aabcf6d385af82e24d78d1ca21091ccf1bba5b8b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"10108216db33f60880a26371aabcf6d385af82e24d78d1ca21091ccf1bba5b8b","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-lprwg_kube-system_b888ea22-8d29-4c36-a973-02cd1262b1ae_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"fa
lse","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b888ea22-8d29-4c36-a973-02cd1262b1ae/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b888ea22-8d29-4c36-a973-02cd1262b1ae/containers/kube-proxy/280f19d9\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/b888ea22-8d29-4c36-a973-02cd1262b1ae/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"contain
er_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/b888ea22-8d29-4c36-a973-02cd1262b1ae/volumes/kubernetes.io~projected/kube-api-access-d7stp\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-lprwg","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b888ea22-8d29-4c36-a973-02cd1262b1ae","kubernetes.io/config.seen":"2023-09-14T23:10:55.368412718Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2871ed46af76b9eb0ff9f55c298e927ed32067ed58285b1f45f29c44ffda1868","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/2871ed46af76b9eb0ff9f55c298e927ed32067ed58285b1f45f29c44ffda1868/userdata","rootfs":"/var/lib/containers/storage/overlay/22c8603f18526a7027605bff240cabde6868bd0978333972c222d3baa3bc683a/merged","created":"2023-09-14T23:10:56.995413759Z","annotations":{"io.container.manager":"c
ri-o","io.kubernetes.container.hash":"68d78db8","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"68d78db8\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2871ed46af76b9eb0ff9f55c298e927ed32067ed58285b1f45f29c44ffda1868","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:10:56.930186483Z","io.kubernetes.cri-o.Image":"812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.28.1","io.kubernetes.cri-o.ImageRef":"812f5241df7fd64adb98d461bd6259
a825a371fb3b2d5258752579380bc39c26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-lprwg\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b888ea22-8d29-4c36-a973-02cd1262b1ae\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-lprwg_b888ea22-8d29-4c36-a973-02cd1262b1ae/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/22c8603f18526a7027605bff240cabde6868bd0978333972c222d3baa3bc683a/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-lprwg_kube-system_b888ea22-8d29-4c36-a973-02cd1262b1ae_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/10108216db33f60880a26371aabcf6d385af82e24d78d1ca21091ccf1bba5b8b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"10108216db33f60880a26371aabcf6d385af82e24d78d1ca21091ccf1bba5b8b","io.kubernetes.cri-o.SandboxName":"k8s
_kube-proxy-lprwg_kube-system_b888ea22-8d29-4c36-a973-02cd1262b1ae_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b888ea22-8d29-4c36-a973-02cd1262b1ae/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b888ea22-8d29-4c36-a973-02cd1262b1ae/containers/kube-proxy/3366196d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/b888ea22-8
d29-4c36-a973-02cd1262b1ae/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/b888ea22-8d29-4c36-a973-02cd1262b1ae/volumes/kubernetes.io~projected/kube-api-access-d7stp\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-lprwg","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b888ea22-8d29-4c36-a973-02cd1262b1ae","kubernetes.io/config.seen":"2023-09-14T23:10:55.368412718Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3096294cf9ef2836640c6a63f9f3dfa4e709195bb30bc3eeb3edb0e785df0352","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/3096294cf9ef2836640c6a63f9f3dfa4e709195bb30bc3eeb3edb0e785df0352/userdata","rootfs":"/var/lib/containers/storage/overlay/6a2219624a8
97ac1b01bf3deb522d9af38aa71aa1c91ce81e63a418dfb9d94b0/merged","created":"2023-09-14T23:11:11.665297124Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"4ad3610d","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"4ad3610d\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernet
es.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3096294cf9ef2836640c6a63f9f3dfa4e709195bb30bc3eeb3edb0e785df0352","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:11:11.566603167Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri-o.ImageRef":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5dd5756b68-fsjl2\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"67bad9d6-02e3-402b-b63e-83403a6c00c4\"}","io.kubernetes.cri-o.LogPath":
"/var/log/pods/kube-system_coredns-5dd5756b68-fsjl2_67bad9d6-02e3-402b-b63e-83403a6c00c4/coredns/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6a2219624a897ac1b01bf3deb522d9af38aa71aa1c91ce81e63a418dfb9d94b0/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5dd5756b68-fsjl2_kube-system_67bad9d6-02e3-402b-b63e-83403a6c00c4_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2edc7f20ed3d342b17ce93090a09bad764b7bd98bd63c25e83bf030983c6f1f7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2edc7f20ed3d342b17ce93090a09bad764b7bd98bd63c25e83bf030983c6f1f7","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5dd5756b68-fsjl2_kube-system_67bad9d6-02e3-402b-b63e-83403a6c00c4_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/et
c/coredns\",\"host_path\":\"/var/lib/kubelet/pods/67bad9d6-02e3-402b-b63e-83403a6c00c4/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/67bad9d6-02e3-402b-b63e-83403a6c00c4/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/67bad9d6-02e3-402b-b63e-83403a6c00c4/containers/coredns/3a1aaeed\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/67bad9d6-02e3-402b-b63e-83403a6c00c4/volumes/kubernetes.io~projected/kube-api-access-q2bfz\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5dd5756b68-fsjl2","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.p
od.uid":"67bad9d6-02e3-402b-b63e-83403a6c00c4","kubernetes.io/config.seen":"2023-09-14T23:10:59.342925004Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3b993f4a4efd1b16d3a6e63807f6858e16f35f158bca8d4683eee16ab99e85fb","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/3b993f4a4efd1b16d3a6e63807f6858e16f35f158bca8d4683eee16ab99e85fb/userdata","rootfs":"/var/lib/containers/storage/overlay/bc032752c44f88a9111693661950ebb049ec6208c01f2ad2b51617a384b91220/merged","created":"2023-09-14T23:10:59.814350642Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"4ad3610d","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath
":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"4ad3610d\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3b993f4a4efd1b16d3a6e63807f6858e16f35f158bca8d4683eee16ab99e85fb","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:10:59.766653882Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa786
27c432ef754a4eb108","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri-o.ImageRef":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5dd5756b68-fsjl2\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"67bad9d6-02e3-402b-b63e-83403a6c00c4\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5dd5756b68-fsjl2_67bad9d6-02e3-402b-b63e-83403a6c00c4/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/bc032752c44f88a9111693661950ebb049ec6208c01f2ad2b51617a384b91220/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5dd5756b68-fsjl2_kube-system_67bad9d6-02e3-402b-b63e-83403a6c00c4_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2edc7f20ed3d342b17ce93090a09bad764b7bd98bd63c25e83bf0309
83c6f1f7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2edc7f20ed3d342b17ce93090a09bad764b7bd98bd63c25e83bf030983c6f1f7","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5dd5756b68-fsjl2_kube-system_67bad9d6-02e3-402b-b63e-83403a6c00c4_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/67bad9d6-02e3-402b-b63e-83403a6c00c4/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/67bad9d6-02e3-402b-b63e-83403a6c00c4/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/67bad9d6-02e3-402b-b63e-83403a6c00c4/containers/coredns/e12d93bf\",\"readonly\":false,\"propagation\
":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/67bad9d6-02e3-402b-b63e-83403a6c00c4/volumes/kubernetes.io~projected/kube-api-access-q2bfz\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5dd5756b68-fsjl2","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"67bad9d6-02e3-402b-b63e-83403a6c00c4","kubernetes.io/config.seen":"2023-09-14T23:10:59.342925004Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3ddca4ba5663e330ca359974f99a382136b3178138059f123f035b5ef453142b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/3ddca4ba5663e330ca359974f99a382136b3178138059f123f035b5ef453142b/userdata","rootfs":"/var/lib/containers/storage/overlay/f681e4c26c0d53de438ce4443108563066d04678d8b95042f1e7c8f2883a7283/merged","created":"2023-09-14T23:11:11.688
966542Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"3673094b","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"3673094b\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3ddca4ba5663e330ca359974f99a382136b3178138059f123f035b5ef453142b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:11:11.539627207Z","io.kubernetes.cri-o.Image":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri-o.Imag
eRef":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-188837\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"19dcc362ef0990caebeed73c36545e51\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-188837_19dcc362ef0990caebeed73c36545e51/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f681e4c26c0d53de438ce4443108563066d04678d8b95042f1e7c8f2883a7283/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-188837_kube-system_19dcc362ef0990caebeed73c36545e51_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/08ee086b60e39861e1ce0a94ebc01a8970091f0deec9eba68df06d4b4c8d1197/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"08ee086b60e39861e1ce0a94ebc01a8970091f0deec9eba68df06d4b4c8d1197","io.kubernetes.cri-o
.SandboxName":"k8s_etcd-pause-188837_kube-system_19dcc362ef0990caebeed73c36545e51_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/19dcc362ef0990caebeed73c36545e51/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/19dcc362ef0990caebeed73c36545e51/containers/etcd/ddfa2bff\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-188837","
io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"19dcc362ef0990caebeed73c36545e51","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.76.2:2379","kubernetes.io/config.hash":"19dcc362ef0990caebeed73c36545e51","kubernetes.io/config.seen":"2023-09-14T23:10:33.801525646Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"75d3d548281e02fd6383390a0e86f5df044efa5c5d00a8e6736524f3b53e876a","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/75d3d548281e02fd6383390a0e86f5df044efa5c5d00a8e6736524f3b53e876a/userdata","rootfs":"/var/lib/containers/storage/overlay/b8cec56e9ebe7d589282b9c416e8ff9dd0ee1735de27e0dbdf8db45f0fc0dc08/merged","created":"2023-09-14T23:10:34.450528796Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"61920a46","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.containe
r.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"61920a46\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"75d3d548281e02fd6383390a0e86f5df044efa5c5d00a8e6736524f3b53e876a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:10:34.342932748Z","io.kubernetes.cri-o.Image":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.28.1","io.kubernetes.cri-o.ImageRef":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-s
cheduler-pause-188837\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"0dd7249489e06a79323b7c83c9463f99\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-188837_0dd7249489e06a79323b7c83c9463f99/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b8cec56e9ebe7d589282b9c416e8ff9dd0ee1735de27e0dbdf8db45f0fc0dc08/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-188837_kube-system_0dd7249489e06a79323b7c83c9463f99_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/81f238f5cfc9e6da272d347d17be3ba3db4bd02285a31365712b48f6dc3d2bfa/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"81f238f5cfc9e6da272d347d17be3ba3db4bd02285a31365712b48f6dc3d2bfa","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-188837_kube-system_0dd7249489e06a79323b7c83c9463f99_0","io.kubernetes.cri-o.SeccompProfilePath"
:"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/0dd7249489e06a79323b7c83c9463f99/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/0dd7249489e06a79323b7c83c9463f99/containers/kube-scheduler/42f2c69d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-188837","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"0dd7249489e06a79323b7c83c9463f99","kubernetes.io/config.hash":"0dd7249489e06a79323b7c83c9463f99","kubernetes.io/config.seen":"2023-09-14T2
3:10:33.801534310Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a0c959683b247e8f18bad2254846f7f5800d6d75983dba1d009db272306fe74c","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/a0c959683b247e8f18bad2254846f7f5800d6d75983dba1d009db272306fe74c/userdata","rootfs":"/var/lib/containers/storage/overlay/5a0758e92f8398386486b4987e5a131680da821d4f8959a32ff098de12487e35/merged","created":"2023-09-14T23:10:34.470518676Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b7243b12","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b7243b12\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kuberne
tes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a0c959683b247e8f18bad2254846f7f5800d6d75983dba1d009db272306fe74c","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:10:34.370627075Z","io.kubernetes.cri-o.Image":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.28.1","io.kubernetes.cri-o.ImageRef":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-188837\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"560e1a7fbd0dd15eaf0a22aeb6173189\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-188837_560e1a7fbd0dd15eaf0a22aeb6173189/kube-controller-manager/0.log","i
o.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/5a0758e92f8398386486b4987e5a131680da821d4f8959a32ff098de12487e35/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-188837_kube-system_560e1a7fbd0dd15eaf0a22aeb6173189_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a0d11b5b50a0125aed66b7931b491a5b26575440230fe0ff66e008827cdf8996/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a0d11b5b50a0125aed66b7931b491a5b26575440230fe0ff66e008827cdf8996","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-188837_kube-system_560e1a7fbd0dd15eaf0a22aeb6173189_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":
true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/560e1a7fbd0dd15eaf0a22aeb6173189/containers/kube-controller-manager/ff149f05\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/560e1a7fbd0dd15eaf0a22aeb6173189/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs
\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-188837","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"560e1a7fbd0dd15eaf0a22aeb6173189","kubernetes.io/config.hash":"560e1a7fbd0dd15eaf0a22aeb6173189","kubernetes.io/config.seen":"2023-09-14T23:10:33.801533211Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a11cc7c0c39389608d37cf9b9d99c9e773dd54af5e8e354ba84b9d667da0e11b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/a11c
c7c0c39389608d37cf9b9d99c9e773dd54af5e8e354ba84b9d667da0e11b/userdata","rootfs":"/var/lib/containers/storage/overlay/e579ccb70c78334c31784b17607543010e536ecd07f0a7fdf0357a4bd33f7e28/merged","created":"2023-09-14T23:10:34.463545092Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"3673094b","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"3673094b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a11cc7c0c39389608d37cf9b9d99c9e773dd54af5e8e354ba84b9d667da0e11b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.C
reated":"2023-09-14T23:10:34.359410006Z","io.kubernetes.cri-o.Image":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri-o.ImageRef":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-188837\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"19dcc362ef0990caebeed73c36545e51\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-188837_19dcc362ef0990caebeed73c36545e51/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e579ccb70c78334c31784b17607543010e536ecd07f0a7fdf0357a4bd33f7e28/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-188837_kube-system_19dcc362ef0990caebeed73c36545e51_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-contain
ers/08ee086b60e39861e1ce0a94ebc01a8970091f0deec9eba68df06d4b4c8d1197/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"08ee086b60e39861e1ce0a94ebc01a8970091f0deec9eba68df06d4b4c8d1197","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-188837_kube-system_19dcc362ef0990caebeed73c36545e51_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/19dcc362ef0990caebeed73c36545e51/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/19dcc362ef0990caebeed73c36545e51/containers/etcd/190c0400\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\
"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-188837","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"19dcc362ef0990caebeed73c36545e51","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.76.2:2379","kubernetes.io/config.hash":"19dcc362ef0990caebeed73c36545e51","kubernetes.io/config.seen":"2023-09-14T23:10:33.801525646Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b148ea09494cac754d9aba5297c8b8139adb0036057ac049874cec2c08d50c9b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/b148ea09494cac754d9aba5297c8b8139adb0036057ac049874cec2c08d50c9b/userdata","rootfs":"/var/lib/containers/storage/overlay/5c86bcdb682e3aba66a2887a379fce94fb8f5009a413ddda449f8780e2df1a17/merged","created":"2023-09-14T23:11:11.6510
92931Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"9867e7ac","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9867e7ac\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b148ea09494cac754d9aba5297c8b8139adb0036057ac049874cec2c08d50c9b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:11:11.449186964Z","io.kubernetes.cri-o.Image":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230809-80a64d96","i
o.kubernetes.cri-o.ImageRef":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-rw9vg\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"fe2fe062-01ec-4c26-b6d1-c181f2d685ea\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-rw9vg_fe2fe062-01ec-4c26-b6d1-c181f2d685ea/kindnet-cni/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/5c86bcdb682e3aba66a2887a379fce94fb8f5009a413ddda449f8780e2df1a17/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-rw9vg_kube-system_fe2fe062-01ec-4c26-b6d1-c181f2d685ea_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4efca5701c8659f9d6d0ed03cc5a55bcf0de0b2a7eef3ffb2e26abcd585b7bcd/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4efca5701c8659f9d6d0ed03cc5a55bcf0d
e0b2a7eef3ffb2e26abcd585b7bcd","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-rw9vg_kube-system_fe2fe062-01ec-4c26-b6d1-c181f2d685ea_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/fe2fe062-01ec-4c26-b6d1-c181f2d685ea/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/fe2fe062-01ec-4c26-b6d1-c181f2d685ea/containers/kindnet-cni/2024cb12\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/et
c/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/fe2fe062-01ec-4c26-b6d1-c181f2d685ea/volumes/kubernetes.io~projected/kube-api-access-m5bj5\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-rw9vg","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"fe2fe062-01ec-4c26-b6d1-c181f2d685ea","kubernetes.io/config.seen":"2023-09-14T23:10:55.366916621Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bb70641b5371bd8bcb9d2648c449748c7bdc9a36befbcdba5dfb4add155e6e45","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/bb70641b5371bd8bcb9d2648c449748c7bdc9a36befbcdba5dfb4add155e6e45/userdata","rootfs":"/var/lib/containers/storage/overlay/f145c2a1a39383851b7173e92d1a0ae7c99102
b72466daa33bea293db88d8d83/merged","created":"2023-09-14T23:11:11.631061164Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"61920a46","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"61920a46\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"bb70641b5371bd8bcb9d2648c449748c7bdc9a36befbcdba5dfb4add155e6e45","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:11:11.492064322Z","io.kubernetes.cri-o.Image":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","io.kuber
netes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.28.1","io.kubernetes.cri-o.ImageRef":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-188837\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"0dd7249489e06a79323b7c83c9463f99\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-188837_0dd7249489e06a79323b7c83c9463f99/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f145c2a1a39383851b7173e92d1a0ae7c99102b72466daa33bea293db88d8d83/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-188837_kube-system_0dd7249489e06a79323b7c83c9463f99_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/81f238f5cfc9e6da272d347d17be3ba3db4bd02285a3136
5712b48f6dc3d2bfa/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"81f238f5cfc9e6da272d347d17be3ba3db4bd02285a31365712b48f6dc3d2bfa","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-188837_kube-system_0dd7249489e06a79323b7c83c9463f99_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/0dd7249489e06a79323b7c83c9463f99/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/0dd7249489e06a79323b7c83c9463f99/containers/kube-scheduler/2ece41b1\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.p
od.name":"kube-scheduler-pause-188837","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"0dd7249489e06a79323b7c83c9463f99","kubernetes.io/config.hash":"0dd7249489e06a79323b7c83c9463f99","kubernetes.io/config.seen":"2023-09-14T23:10:33.801534310Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c4d6d8178faf698da5ef35226961a972825968c13748ebe40df01345fd299a55","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/c4d6d8178faf698da5ef35226961a972825968c13748ebe40df01345fd299a55/userdata","rootfs":"/var/lib/containers/storage/overlay/50cc30cf6f991e4650ff1db34e61a3373d1122070c3ee9717a04ef7b56294690/merged","created":"2023-09-14T23:10:58.592604365Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"9867e7ac","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/terminatio
n-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9867e7ac\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"c4d6d8178faf698da5ef35226961a972825968c13748ebe40df01345fd299a55","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:10:58.539167702Z","io.kubernetes.cri-o.Image":"docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri-o.ImageRef":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-rw
9vg\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"fe2fe062-01ec-4c26-b6d1-c181f2d685ea\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-rw9vg_fe2fe062-01ec-4c26-b6d1-c181f2d685ea/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/50cc30cf6f991e4650ff1db34e61a3373d1122070c3ee9717a04ef7b56294690/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-rw9vg_kube-system_fe2fe062-01ec-4c26-b6d1-c181f2d685ea_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4efca5701c8659f9d6d0ed03cc5a55bcf0de0b2a7eef3ffb2e26abcd585b7bcd/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4efca5701c8659f9d6d0ed03cc5a55bcf0de0b2a7eef3ffb2e26abcd585b7bcd","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-rw9vg_kube-system_fe2fe062-01ec-4c26-b6d1-c181f2d685ea_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernete
s.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/fe2fe062-01ec-4c26-b6d1-c181f2d685ea/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/fe2fe062-01ec-4c26-b6d1-c181f2d685ea/containers/kindnet-cni/fbd9b8c7\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/fe2fe062-0
1ec-4c26-b6d1-c181f2d685ea/volumes/kubernetes.io~projected/kube-api-access-m5bj5\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-rw9vg","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"fe2fe062-01ec-4c26-b6d1-c181f2d685ea","kubernetes.io/config.seen":"2023-09-14T23:10:55.366916621Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d8085d7321e65ea2e472b73ea3e32fb771ef86f98dd31e666c61b721765ec678","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/d8085d7321e65ea2e472b73ea3e32fb771ef86f98dd31e666c61b721765ec678/userdata","rootfs":"/var/lib/containers/storage/overlay/64d13becf5862b04293b7a84c46254dda47408bfba2235c88c3663de009504cd/merged","created":"2023-09-14T23:11:11.640199218Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b7243b12","io.kubernetes.container.name":"kube-controller-manager","io.ku
bernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b7243b12\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d8085d7321e65ea2e472b73ea3e32fb771ef86f98dd31e666c61b721765ec678","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:11:11.454570083Z","io.kubernetes.cri-o.Image":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.28.1","io.kubernetes.cri-o.ImageRef":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.co
ntainer.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-188837\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"560e1a7fbd0dd15eaf0a22aeb6173189\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-188837_560e1a7fbd0dd15eaf0a22aeb6173189/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/64d13becf5862b04293b7a84c46254dda47408bfba2235c88c3663de009504cd/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-188837_kube-system_560e1a7fbd0dd15eaf0a22aeb6173189_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a0d11b5b50a0125aed66b7931b491a5b26575440230fe0ff66e008827cdf8996/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a0d11b5b50a0125aed66b7931b491a5b26575440230fe0ff66e008827cdf8996","io.kuber
netes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-188837_kube-system_560e1a7fbd0dd15eaf0a22aeb6173189_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/560e1a7fbd0dd15eaf0a22aeb6173189/containers/kube-controller-manager/80d214d3\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/560e1a7fbd0dd15eaf0a22aeb6173189/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubern
etes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-188837","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.ui
d":"560e1a7fbd0dd15eaf0a22aeb6173189","kubernetes.io/config.hash":"560e1a7fbd0dd15eaf0a22aeb6173189","kubernetes.io/config.seen":"2023-09-14T23:10:33.801533211Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f13a0e17ffd11a8efa7730301233ce818c79283eb992e004b5809a5e02460197","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/f13a0e17ffd11a8efa7730301233ce818c79283eb992e004b5809a5e02460197/userdata","rootfs":"/var/lib/containers/storage/overlay/08bc0324fcd6582549b7b04fe9e9074dc9a2b0729367aa2e7ca6f6ed40b39e95/merged","created":"2023-09-14T23:11:11.623486334Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b4bfd9d0","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b4bfd9d0\",\"
io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f13a0e17ffd11a8efa7730301233ce818c79283eb992e004b5809a5e02460197","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:11:11.383307018Z","io.kubernetes.cri-o.Image":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.1","io.kubernetes.cri-o.ImageRef":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-188837\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9daa06f1bce90ea27262295fdd763f52\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-syst
em_kube-apiserver-pause-188837_9daa06f1bce90ea27262295fdd763f52/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/08bc0324fcd6582549b7b04fe9e9074dc9a2b0729367aa2e7ca6f6ed40b39e95/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-188837_kube-system_9daa06f1bce90ea27262295fdd763f52_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a5fbc85f793393abbb3f5762b72f835014218bc8cb33ad2b3adf4eec5cee35fd/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a5fbc85f793393abbb3f5762b72f835014218bc8cb33ad2b3adf4eec5cee35fd","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-188837_kube-system_9daa06f1bce90ea27262295fdd763f52_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/term
ination-log\",\"host_path\":\"/var/lib/kubelet/pods/9daa06f1bce90ea27262295fdd763f52/containers/kube-apiserver/3df15ec9\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9daa06f1bce90ea27262295fdd763f52/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"hos
t_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-188837","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9daa06f1bce90ea27262295fdd763f52","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.76.2:8443","kubernetes.io/config.hash":"9daa06f1bce90ea27262295fdd763f52","kubernetes.io/config.seen":"2023-09-14T23:10:33.801531701Z","kubernetes.io/config.source":"file"},"owner":"root"}]
	I0914 23:11:23.297867 2974444 cri.go:126] list returned 14 containers
	I0914 23:11:23.297882 2974444 cri.go:129] container: {ID:1856cb02fae9437a99a0b362295ee3197a1ec49123b0d2c8de9cc17137e0233a Status:stopped}
	I0914 23:11:23.297898 2974444 cri.go:135] skipping {1856cb02fae9437a99a0b362295ee3197a1ec49123b0d2c8de9cc17137e0233a stopped}: state = "stopped", want "paused"
	I0914 23:11:23.297908 2974444 cri.go:129] container: {ID:1ee06b53307457c677083117c9f598f2e47155f0ef6cfb7f16d3da49bc0c7336 Status:stopped}
	I0914 23:11:23.297921 2974444 cri.go:135] skipping {1ee06b53307457c677083117c9f598f2e47155f0ef6cfb7f16d3da49bc0c7336 stopped}: state = "stopped", want "paused"
	I0914 23:11:23.297930 2974444 cri.go:129] container: {ID:2871ed46af76b9eb0ff9f55c298e927ed32067ed58285b1f45f29c44ffda1868 Status:stopped}
	I0914 23:11:23.297939 2974444 cri.go:135] skipping {2871ed46af76b9eb0ff9f55c298e927ed32067ed58285b1f45f29c44ffda1868 stopped}: state = "stopped", want "paused"
	I0914 23:11:23.297948 2974444 cri.go:129] container: {ID:3096294cf9ef2836640c6a63f9f3dfa4e709195bb30bc3eeb3edb0e785df0352 Status:stopped}
	I0914 23:11:23.297955 2974444 cri.go:135] skipping {3096294cf9ef2836640c6a63f9f3dfa4e709195bb30bc3eeb3edb0e785df0352 stopped}: state = "stopped", want "paused"
	I0914 23:11:23.297961 2974444 cri.go:129] container: {ID:3b993f4a4efd1b16d3a6e63807f6858e16f35f158bca8d4683eee16ab99e85fb Status:stopped}
	I0914 23:11:23.297971 2974444 cri.go:135] skipping {3b993f4a4efd1b16d3a6e63807f6858e16f35f158bca8d4683eee16ab99e85fb stopped}: state = "stopped", want "paused"
	I0914 23:11:23.297977 2974444 cri.go:129] container: {ID:3ddca4ba5663e330ca359974f99a382136b3178138059f123f035b5ef453142b Status:stopped}
	I0914 23:11:23.297986 2974444 cri.go:135] skipping {3ddca4ba5663e330ca359974f99a382136b3178138059f123f035b5ef453142b stopped}: state = "stopped", want "paused"
	I0914 23:11:23.297993 2974444 cri.go:129] container: {ID:75d3d548281e02fd6383390a0e86f5df044efa5c5d00a8e6736524f3b53e876a Status:stopped}
	I0914 23:11:23.297999 2974444 cri.go:135] skipping {75d3d548281e02fd6383390a0e86f5df044efa5c5d00a8e6736524f3b53e876a stopped}: state = "stopped", want "paused"
	I0914 23:11:23.298008 2974444 cri.go:129] container: {ID:a0c959683b247e8f18bad2254846f7f5800d6d75983dba1d009db272306fe74c Status:stopped}
	I0914 23:11:23.298018 2974444 cri.go:135] skipping {a0c959683b247e8f18bad2254846f7f5800d6d75983dba1d009db272306fe74c stopped}: state = "stopped", want "paused"
	I0914 23:11:23.298027 2974444 cri.go:129] container: {ID:a11cc7c0c39389608d37cf9b9d99c9e773dd54af5e8e354ba84b9d667da0e11b Status:stopped}
	I0914 23:11:23.298036 2974444 cri.go:135] skipping {a11cc7c0c39389608d37cf9b9d99c9e773dd54af5e8e354ba84b9d667da0e11b stopped}: state = "stopped", want "paused"
	I0914 23:11:23.298042 2974444 cri.go:129] container: {ID:b148ea09494cac754d9aba5297c8b8139adb0036057ac049874cec2c08d50c9b Status:stopped}
	I0914 23:11:23.298049 2974444 cri.go:135] skipping {b148ea09494cac754d9aba5297c8b8139adb0036057ac049874cec2c08d50c9b stopped}: state = "stopped", want "paused"
	I0914 23:11:23.298058 2974444 cri.go:129] container: {ID:bb70641b5371bd8bcb9d2648c449748c7bdc9a36befbcdba5dfb4add155e6e45 Status:stopped}
	I0914 23:11:23.298064 2974444 cri.go:135] skipping {bb70641b5371bd8bcb9d2648c449748c7bdc9a36befbcdba5dfb4add155e6e45 stopped}: state = "stopped", want "paused"
	I0914 23:11:23.298072 2974444 cri.go:129] container: {ID:c4d6d8178faf698da5ef35226961a972825968c13748ebe40df01345fd299a55 Status:stopped}
	I0914 23:11:23.298079 2974444 cri.go:135] skipping {c4d6d8178faf698da5ef35226961a972825968c13748ebe40df01345fd299a55 stopped}: state = "stopped", want "paused"
	I0914 23:11:23.298084 2974444 cri.go:129] container: {ID:d8085d7321e65ea2e472b73ea3e32fb771ef86f98dd31e666c61b721765ec678 Status:stopped}
	I0914 23:11:23.298091 2974444 cri.go:135] skipping {d8085d7321e65ea2e472b73ea3e32fb771ef86f98dd31e666c61b721765ec678 stopped}: state = "stopped", want "paused"
	I0914 23:11:23.298100 2974444 cri.go:129] container: {ID:f13a0e17ffd11a8efa7730301233ce818c79283eb992e004b5809a5e02460197 Status:stopped}
	I0914 23:11:23.298107 2974444 cri.go:135] skipping {f13a0e17ffd11a8efa7730301233ce818c79283eb992e004b5809a5e02460197 stopped}: state = "stopped", want "paused"
	I0914 23:11:23.298166 2974444 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 23:11:23.308773 2974444 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 23:11:23.308831 2974444 kubeadm.go:636] restartCluster start
	I0914 23:11:23.308913 2974444 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 23:11:23.319081 2974444 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:23.319763 2974444 kubeconfig.go:92] found "pause-188837" server: "https://192.168.76.2:8443"
	I0914 23:11:23.320823 2974444 kapi.go:59] client config for pause-188837: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837/client.key", CAFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 23:11:23.322554 2974444 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 23:11:23.334356 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:23.334473 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:23.346384 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:23.346441 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:23.346495 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:23.358010 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:23.858758 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:23.858854 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:23.872122 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:24.358648 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:24.358736 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:24.370929 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:24.858755 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:24.858851 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:24.871166 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:25.358845 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:25.358951 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:25.370830 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:25.858193 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:25.858276 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:25.870460 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:26.359101 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:26.359183 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:26.371106 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:26.858172 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:26.858258 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:26.872633 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:27.358177 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:27.358256 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:27.381928 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:27.858170 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:27.858251 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:27.876932 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:28.358332 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:28.358412 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:28.370781 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:28.858203 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:28.858293 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:28.870063 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:29.358212 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:29.358299 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:29.370269 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:29.859002 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:29.859084 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:29.871448 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:30.359101 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:30.359189 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:30.371382 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:30.858572 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:30.858638 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:30.893313 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:31.358632 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:31.358722 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:31.391068 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:31.858658 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:31.858740 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:31.871191 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:32.358835 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:32.358922 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:32.371681 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:32.858198 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:32.858306 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:32.870241 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:33.334719 2974444 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 23:11:33.334749 2974444 kubeadm.go:1128] stopping kube-system containers ...
	I0914 23:11:33.334762 2974444 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 23:11:33.334831 2974444 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 23:11:33.378497 2974444 cri.go:89] found id: "af88552a2fe0ec2def6d5fcbc7a8ed3820b2edab71922c453ed4b90c0742a4bd"
	I0914 23:11:33.378516 2974444 cri.go:89] found id: "cff2edb1f640fe1f42767a20c1ea692f296328f86b24187ba5993d5026d95092"
	I0914 23:11:33.378522 2974444 cri.go:89] found id: "7817de3dddb7f70040619830ee918074c182e90ca7e8c414285395b02003d648"
	I0914 23:11:33.378526 2974444 cri.go:89] found id: "3096294cf9ef2836640c6a63f9f3dfa4e709195bb30bc3eeb3edb0e785df0352"
	I0914 23:11:33.378530 2974444 cri.go:89] found id: "3ddca4ba5663e330ca359974f99a382136b3178138059f123f035b5ef453142b"
	I0914 23:11:33.378536 2974444 cri.go:89] found id: "bb70641b5371bd8bcb9d2648c449748c7bdc9a36befbcdba5dfb4add155e6e45"
	I0914 23:11:33.378540 2974444 cri.go:89] found id: "d8085d7321e65ea2e472b73ea3e32fb771ef86f98dd31e666c61b721765ec678"
	I0914 23:11:33.378544 2974444 cri.go:89] found id: "b148ea09494cac754d9aba5297c8b8139adb0036057ac049874cec2c08d50c9b"
	I0914 23:11:33.378548 2974444 cri.go:89] found id: "1ee06b53307457c677083117c9f598f2e47155f0ef6cfb7f16d3da49bc0c7336"
	I0914 23:11:33.378559 2974444 cri.go:89] found id: "f13a0e17ffd11a8efa7730301233ce818c79283eb992e004b5809a5e02460197"
	I0914 23:11:33.378563 2974444 cri.go:89] found id: ""
	I0914 23:11:33.378568 2974444 cri.go:234] Stopping containers: [af88552a2fe0ec2def6d5fcbc7a8ed3820b2edab71922c453ed4b90c0742a4bd cff2edb1f640fe1f42767a20c1ea692f296328f86b24187ba5993d5026d95092 7817de3dddb7f70040619830ee918074c182e90ca7e8c414285395b02003d648 3096294cf9ef2836640c6a63f9f3dfa4e709195bb30bc3eeb3edb0e785df0352 3ddca4ba5663e330ca359974f99a382136b3178138059f123f035b5ef453142b bb70641b5371bd8bcb9d2648c449748c7bdc9a36befbcdba5dfb4add155e6e45 d8085d7321e65ea2e472b73ea3e32fb771ef86f98dd31e666c61b721765ec678 b148ea09494cac754d9aba5297c8b8139adb0036057ac049874cec2c08d50c9b 1ee06b53307457c677083117c9f598f2e47155f0ef6cfb7f16d3da49bc0c7336 f13a0e17ffd11a8efa7730301233ce818c79283eb992e004b5809a5e02460197]
	I0914 23:11:33.378621 2974444 ssh_runner.go:195] Run: which crictl
	I0914 23:11:33.383494 2974444 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 af88552a2fe0ec2def6d5fcbc7a8ed3820b2edab71922c453ed4b90c0742a4bd cff2edb1f640fe1f42767a20c1ea692f296328f86b24187ba5993d5026d95092 7817de3dddb7f70040619830ee918074c182e90ca7e8c414285395b02003d648 3096294cf9ef2836640c6a63f9f3dfa4e709195bb30bc3eeb3edb0e785df0352 3ddca4ba5663e330ca359974f99a382136b3178138059f123f035b5ef453142b bb70641b5371bd8bcb9d2648c449748c7bdc9a36befbcdba5dfb4add155e6e45 d8085d7321e65ea2e472b73ea3e32fb771ef86f98dd31e666c61b721765ec678 b148ea09494cac754d9aba5297c8b8139adb0036057ac049874cec2c08d50c9b 1ee06b53307457c677083117c9f598f2e47155f0ef6cfb7f16d3da49bc0c7336 f13a0e17ffd11a8efa7730301233ce818c79283eb992e004b5809a5e02460197
	I0914 23:11:33.958673 2974444 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 23:11:34.063570 2974444 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 23:11:34.079282 2974444 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Sep 14 23:10 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Sep 14 23:10 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Sep 14 23:10 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep 14 23:10 /etc/kubernetes/scheduler.conf
	
	I0914 23:11:34.079355 2974444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 23:11:34.091262 2974444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 23:11:34.103194 2974444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 23:11:34.113645 2974444 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:34.113708 2974444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 23:11:34.123532 2974444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 23:11:34.133542 2974444 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:34.133609 2974444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 23:11:34.143725 2974444 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 23:11:34.154448 2974444 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 23:11:34.154473 2974444 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:11:34.230124 2974444 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:11:37.564733 2974444 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.334569585s)
	I0914 23:11:37.564777 2974444 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:11:37.762970 2974444 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:11:37.845599 2974444 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:11:37.932078 2974444 api_server.go:52] waiting for apiserver process to appear ...
	I0914 23:11:37.932150 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:11:37.972607 2974444 api_server.go:72] duration metric: took 40.528346ms to wait for apiserver process to appear ...
	I0914 23:11:37.972634 2974444 api_server.go:88] waiting for apiserver healthz status ...
	I0914 23:11:37.972651 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:11:42.973660 2974444 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:11:42.973698 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:11:47.973981 2974444 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:11:48.474627 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:11:53.475808 2974444 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:11:53.475848 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:11:54.470967 2974444 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:55796->192.168.76.2:8443: read: connection reset by peer
	I0914 23:11:54.471003 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:11:54.471312 2974444 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0914 23:11:54.474522 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:11:54.474847 2974444 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0914 23:11:54.974784 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:11:54.975103 2974444 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0914 23:11:55.474819 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:11:55.475217 2974444 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0914 23:11:55.974666 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:11:55.975038 2974444 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0914 23:11:56.474736 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:11:56.475086 2974444 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0914 23:11:56.974725 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:12:01.975361 2974444 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:12:01.975394 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:12:03.180748 2974444 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 23:12:03.180774 2974444 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 23:12:03.180790 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:12:03.264523 2974444 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 23:12:03.264597 2974444 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 23:12:03.474770 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:12:03.483979 2974444 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 23:12:03.484000 2974444 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 23:12:03.974133 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:12:03.989403 2974444 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 23:12:03.989480 2974444 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 23:12:04.474949 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:12:04.504167 2974444 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 23:12:04.504242 2974444 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 23:12:04.974104 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:12:05.006252 2974444 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0914 23:12:05.052031 2974444 api_server.go:141] control plane version: v1.28.1
	I0914 23:12:05.052057 2974444 api_server.go:131] duration metric: took 27.079416243s to wait for apiserver health ...
	I0914 23:12:05.052067 2974444 cni.go:84] Creating CNI manager for ""
	I0914 23:12:05.052074 2974444 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 23:12:05.055138 2974444 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0914 23:12:05.057759 2974444 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 23:12:05.073734 2974444 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0914 23:12:05.073751 2974444 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0914 23:12:05.118804 2974444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 23:12:06.329020 2974444 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.210181727s)
	I0914 23:12:06.329048 2974444 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 23:12:06.348758 2974444 system_pods.go:59] 7 kube-system pods found
	I0914 23:12:06.349864 2974444 system_pods.go:61] "coredns-5dd5756b68-fsjl2" [67bad9d6-02e3-402b-b63e-83403a6c00c4] Running
	I0914 23:12:06.349899 2974444 system_pods.go:61] "etcd-pause-188837" [93cf2058-c73c-49a3-9199-8f891b7bf9a7] Running
	I0914 23:12:06.349924 2974444 system_pods.go:61] "kindnet-rw9vg" [fe2fe062-01ec-4c26-b6d1-c181f2d685ea] Running
	I0914 23:12:06.349948 2974444 system_pods.go:61] "kube-apiserver-pause-188837" [4ea4415b-c449-4b3c-9613-cf902f8436ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 23:12:06.349971 2974444 system_pods.go:61] "kube-controller-manager-pause-188837" [eb732242-4a9c-4f7b-9aa5-bd9b142821c8] Running
	I0914 23:12:06.350003 2974444 system_pods.go:61] "kube-proxy-lprwg" [b888ea22-8d29-4c36-a973-02cd1262b1ae] Running
	I0914 23:12:06.350026 2974444 system_pods.go:61] "kube-scheduler-pause-188837" [bb6908cf-28a3-43f6-ad86-824aa11d1ade] Running
	I0914 23:12:06.350047 2974444 system_pods.go:74] duration metric: took 20.99262ms to wait for pod list to return data ...
	I0914 23:12:06.350066 2974444 node_conditions.go:102] verifying NodePressure condition ...
	I0914 23:12:06.355026 2974444 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 23:12:06.355092 2974444 node_conditions.go:123] node cpu capacity is 2
	I0914 23:12:06.355116 2974444 node_conditions.go:105] duration metric: took 5.031644ms to run NodePressure ...
	I0914 23:12:06.355147 2974444 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:12:06.695648 2974444 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 23:12:06.704784 2974444 kubeadm.go:787] kubelet initialised
	I0914 23:12:06.704853 2974444 kubeadm.go:788] duration metric: took 9.148483ms waiting for restarted kubelet to initialise ...
	I0914 23:12:06.704875 2974444 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 23:12:06.713967 2974444 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-fsjl2" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:06.728852 2974444 pod_ready.go:92] pod "coredns-5dd5756b68-fsjl2" in "kube-system" namespace has status "Ready":"True"
	I0914 23:12:06.728920 2974444 pod_ready.go:81] duration metric: took 14.880795ms waiting for pod "coredns-5dd5756b68-fsjl2" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:06.728947 2974444 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:06.743535 2974444 pod_ready.go:92] pod "etcd-pause-188837" in "kube-system" namespace has status "Ready":"True"
	I0914 23:12:06.743604 2974444 pod_ready.go:81] duration metric: took 14.634289ms waiting for pod "etcd-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:06.743633 2974444 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:08.783079 2974444 pod_ready.go:102] pod "kube-apiserver-pause-188837" in "kube-system" namespace has status "Ready":"False"
	I0914 23:12:10.784465 2974444 pod_ready.go:102] pod "kube-apiserver-pause-188837" in "kube-system" namespace has status "Ready":"False"
	I0914 23:12:13.283225 2974444 pod_ready.go:102] pod "kube-apiserver-pause-188837" in "kube-system" namespace has status "Ready":"False"
	I0914 23:12:15.287316 2974444 pod_ready.go:102] pod "kube-apiserver-pause-188837" in "kube-system" namespace has status "Ready":"False"
	I0914 23:12:15.794892 2974444 pod_ready.go:92] pod "kube-apiserver-pause-188837" in "kube-system" namespace has status "Ready":"True"
	I0914 23:12:15.794915 2974444 pod_ready.go:81] duration metric: took 9.051260798s waiting for pod "kube-apiserver-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:15.794927 2974444 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:15.809173 2974444 pod_ready.go:92] pod "kube-controller-manager-pause-188837" in "kube-system" namespace has status "Ready":"True"
	I0914 23:12:15.809192 2974444 pod_ready.go:81] duration metric: took 14.257592ms waiting for pod "kube-controller-manager-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:15.809204 2974444 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lprwg" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:15.818886 2974444 pod_ready.go:92] pod "kube-proxy-lprwg" in "kube-system" namespace has status "Ready":"True"
	I0914 23:12:15.818953 2974444 pod_ready.go:81] duration metric: took 9.740203ms waiting for pod "kube-proxy-lprwg" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:15.818979 2974444 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:15.834530 2974444 pod_ready.go:92] pod "kube-scheduler-pause-188837" in "kube-system" namespace has status "Ready":"True"
	I0914 23:12:15.834599 2974444 pod_ready.go:81] duration metric: took 15.597858ms waiting for pod "kube-scheduler-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:15.834623 2974444 pod_ready.go:38] duration metric: took 9.129724991s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 23:12:15.834672 2974444 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 23:12:15.850268 2974444 ops.go:34] apiserver oom_adj: -16
	I0914 23:12:15.850301 2974444 kubeadm.go:640] restartCluster took 52.541449512s
	I0914 23:12:15.850310 2974444 kubeadm.go:406] StartCluster complete in 52.635899566s
	I0914 23:12:15.850326 2974444 settings.go:142] acquiring lock: {Name:mk797c549b93011f59a1b1413899d7ef3e9584bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:12:15.850399 2974444 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 23:12:15.851384 2974444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/kubeconfig: {Name:mk7bbed64d52f47ff1629e01e738a8a5f092c9ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:12:15.851696 2974444 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 23:12:15.851994 2974444 config.go:182] Loaded profile config "pause-188837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 23:12:15.852117 2974444 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0914 23:12:15.854449 2974444 out.go:177] * Enabled addons: 
	I0914 23:12:15.852605 2974444 kapi.go:59] client config for pause-188837: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837/client.key", CAFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 23:12:15.856393 2974444 addons.go:502] enable addons completed in 4.268535ms: enabled=[]
	I0914 23:12:15.869598 2974444 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-188837" context rescaled to 1 replicas
	I0914 23:12:15.869681 2974444 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 23:12:15.871709 2974444 out.go:177] * Verifying Kubernetes components...
	I0914 23:12:15.873843 2974444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 23:12:16.013326 2974444 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0914 23:12:16.013369 2974444 node_ready.go:35] waiting up to 6m0s for node "pause-188837" to be "Ready" ...
	I0914 23:12:16.016463 2974444 node_ready.go:49] node "pause-188837" has status "Ready":"True"
	I0914 23:12:16.016486 2974444 node_ready.go:38] duration metric: took 3.104442ms waiting for node "pause-188837" to be "Ready" ...
	I0914 23:12:16.016520 2974444 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 23:12:16.022660 2974444 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fsjl2" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:16.181590 2974444 pod_ready.go:92] pod "coredns-5dd5756b68-fsjl2" in "kube-system" namespace has status "Ready":"True"
	I0914 23:12:16.181613 2974444 pod_ready.go:81] duration metric: took 158.921262ms waiting for pod "coredns-5dd5756b68-fsjl2" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:16.181626 2974444 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:16.581524 2974444 pod_ready.go:92] pod "etcd-pause-188837" in "kube-system" namespace has status "Ready":"True"
	I0914 23:12:16.581595 2974444 pod_ready.go:81] duration metric: took 399.960574ms waiting for pod "etcd-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:16.581620 2974444 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:16.982027 2974444 pod_ready.go:92] pod "kube-apiserver-pause-188837" in "kube-system" namespace has status "Ready":"True"
	I0914 23:12:16.982050 2974444 pod_ready.go:81] duration metric: took 400.42285ms waiting for pod "kube-apiserver-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:16.982063 2974444 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:17.381446 2974444 pod_ready.go:92] pod "kube-controller-manager-pause-188837" in "kube-system" namespace has status "Ready":"True"
	I0914 23:12:17.381516 2974444 pod_ready.go:81] duration metric: took 399.44384ms waiting for pod "kube-controller-manager-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:17.381544 2974444 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lprwg" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:17.780952 2974444 pod_ready.go:92] pod "kube-proxy-lprwg" in "kube-system" namespace has status "Ready":"True"
	I0914 23:12:17.781022 2974444 pod_ready.go:81] duration metric: took 399.456312ms waiting for pod "kube-proxy-lprwg" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:17.781047 2974444 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:18.195041 2974444 pod_ready.go:92] pod "kube-scheduler-pause-188837" in "kube-system" namespace has status "Ready":"True"
	I0914 23:12:18.195114 2974444 pod_ready.go:81] duration metric: took 414.043774ms waiting for pod "kube-scheduler-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:18.195138 2974444 pod_ready.go:38] duration metric: took 2.178606476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 23:12:18.195167 2974444 api_server.go:52] waiting for apiserver process to appear ...
	I0914 23:12:18.195254 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:12:18.213428 2974444 api_server.go:72] duration metric: took 2.34369082s to wait for apiserver process to appear ...
	I0914 23:12:18.213495 2974444 api_server.go:88] waiting for apiserver healthz status ...
	I0914 23:12:18.213529 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:12:18.222994 2974444 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0914 23:12:18.224905 2974444 api_server.go:141] control plane version: v1.28.1
	I0914 23:12:18.224926 2974444 api_server.go:131] duration metric: took 11.411333ms to wait for apiserver health ...
	I0914 23:12:18.224934 2974444 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 23:12:18.385978 2974444 system_pods.go:59] 7 kube-system pods found
	I0914 23:12:18.386014 2974444 system_pods.go:61] "coredns-5dd5756b68-fsjl2" [67bad9d6-02e3-402b-b63e-83403a6c00c4] Running
	I0914 23:12:18.386021 2974444 system_pods.go:61] "etcd-pause-188837" [93cf2058-c73c-49a3-9199-8f891b7bf9a7] Running
	I0914 23:12:18.386027 2974444 system_pods.go:61] "kindnet-rw9vg" [fe2fe062-01ec-4c26-b6d1-c181f2d685ea] Running
	I0914 23:12:18.386051 2974444 system_pods.go:61] "kube-apiserver-pause-188837" [4ea4415b-c449-4b3c-9613-cf902f8436ea] Running
	I0914 23:12:18.386069 2974444 system_pods.go:61] "kube-controller-manager-pause-188837" [eb732242-4a9c-4f7b-9aa5-bd9b142821c8] Running
	I0914 23:12:18.386075 2974444 system_pods.go:61] "kube-proxy-lprwg" [b888ea22-8d29-4c36-a973-02cd1262b1ae] Running
	I0914 23:12:18.386086 2974444 system_pods.go:61] "kube-scheduler-pause-188837" [bb6908cf-28a3-43f6-ad86-824aa11d1ade] Running
	I0914 23:12:18.386092 2974444 system_pods.go:74] duration metric: took 161.152326ms to wait for pod list to return data ...
	I0914 23:12:18.386106 2974444 default_sa.go:34] waiting for default service account to be created ...
	I0914 23:12:18.589115 2974444 default_sa.go:45] found service account: "default"
	I0914 23:12:18.589135 2974444 default_sa.go:55] duration metric: took 203.022857ms for default service account to be created ...
	I0914 23:12:18.589146 2974444 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 23:12:18.784665 2974444 system_pods.go:86] 7 kube-system pods found
	I0914 23:12:18.784746 2974444 system_pods.go:89] "coredns-5dd5756b68-fsjl2" [67bad9d6-02e3-402b-b63e-83403a6c00c4] Running
	I0914 23:12:18.784770 2974444 system_pods.go:89] "etcd-pause-188837" [93cf2058-c73c-49a3-9199-8f891b7bf9a7] Running
	I0914 23:12:18.784797 2974444 system_pods.go:89] "kindnet-rw9vg" [fe2fe062-01ec-4c26-b6d1-c181f2d685ea] Running
	I0914 23:12:18.784828 2974444 system_pods.go:89] "kube-apiserver-pause-188837" [4ea4415b-c449-4b3c-9613-cf902f8436ea] Running
	I0914 23:12:18.784854 2974444 system_pods.go:89] "kube-controller-manager-pause-188837" [eb732242-4a9c-4f7b-9aa5-bd9b142821c8] Running
	I0914 23:12:18.784881 2974444 system_pods.go:89] "kube-proxy-lprwg" [b888ea22-8d29-4c36-a973-02cd1262b1ae] Running
	I0914 23:12:18.784907 2974444 system_pods.go:89] "kube-scheduler-pause-188837" [bb6908cf-28a3-43f6-ad86-824aa11d1ade] Running
	I0914 23:12:18.784931 2974444 system_pods.go:126] duration metric: took 195.77766ms to wait for k8s-apps to be running ...
	I0914 23:12:18.784953 2974444 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 23:12:18.785023 2974444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 23:12:18.799837 2974444 system_svc.go:56] duration metric: took 14.874305ms WaitForService to wait for kubelet.
	I0914 23:12:18.799860 2974444 kubeadm.go:581] duration metric: took 2.930131172s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 23:12:18.799878 2974444 node_conditions.go:102] verifying NodePressure condition ...
	I0914 23:12:18.981756 2974444 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 23:12:18.981782 2974444 node_conditions.go:123] node cpu capacity is 2
	I0914 23:12:18.981792 2974444 node_conditions.go:105] duration metric: took 181.909343ms to run NodePressure ...
	I0914 23:12:18.981804 2974444 start.go:228] waiting for startup goroutines ...
	I0914 23:12:18.981811 2974444 start.go:233] waiting for cluster config update ...
	I0914 23:12:18.981818 2974444 start.go:242] writing updated cluster config ...
	I0914 23:12:18.982136 2974444 ssh_runner.go:195] Run: rm -f paused
	I0914 23:12:19.077458 2974444 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 23:12:19.079949 2974444 out.go:177] * Done! kubectl is now configured to use "pause-188837" cluster and "default" namespace by default

** /stderr **
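The kubelet probe that the start log issues over SSH (sudo systemctl is-active --quiet service kubelet) can be repeated by hand against the same profile when triaging a failure like this; a minimal sketch, assuming the job-built binary out/minikube-linux-arm64 is used and dropping --quiet so the unit state is printed:

	out/minikube-linux-arm64 -p pause-188837 ssh "sudo systemctl is-active service kubelet"
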
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-188837
helpers_test.go:235: (dbg) docker inspect pause-188837:

-- stdout --
	[
	    {
	        "Id": "238fdd0a23ddbe7047bc942943bbc9d654ea46ac1d8ea5fe344869f54ecb3c54",
	        "Created": "2023-09-14T23:10:15.86430085Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2971242,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-14T23:10:16.234828749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:dc3fcbe613a9f8e1e2fcaa6abcc8f1cc38d54475810991578dbd56e1d327de1f",
	        "ResolvConfPath": "/var/lib/docker/containers/238fdd0a23ddbe7047bc942943bbc9d654ea46ac1d8ea5fe344869f54ecb3c54/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/238fdd0a23ddbe7047bc942943bbc9d654ea46ac1d8ea5fe344869f54ecb3c54/hostname",
	        "HostsPath": "/var/lib/docker/containers/238fdd0a23ddbe7047bc942943bbc9d654ea46ac1d8ea5fe344869f54ecb3c54/hosts",
	        "LogPath": "/var/lib/docker/containers/238fdd0a23ddbe7047bc942943bbc9d654ea46ac1d8ea5fe344869f54ecb3c54/238fdd0a23ddbe7047bc942943bbc9d654ea46ac1d8ea5fe344869f54ecb3c54-json.log",
	        "Name": "/pause-188837",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-188837:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-188837",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/eba455f22ac7b4e5d622158a95ba5cae31e4b21aa6ec6f8909253dbaf86a155b-init/diff:/var/lib/docker/overlay2/01d6f4b44b4d3652921d9dfec86a5600f173a3b2af60ce73c84e7669723804ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eba455f22ac7b4e5d622158a95ba5cae31e4b21aa6ec6f8909253dbaf86a155b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eba455f22ac7b4e5d622158a95ba5cae31e4b21aa6ec6f8909253dbaf86a155b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eba455f22ac7b4e5d622158a95ba5cae31e4b21aa6ec6f8909253dbaf86a155b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-188837",
	                "Source": "/var/lib/docker/volumes/pause-188837/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-188837",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-188837",
	                "name.minikube.sigs.k8s.io": "pause-188837",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2b38fbaa301d8e5c882d9ff023f8008a2135ca03425cf0c30950c6428d6b6116",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36579"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36578"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36575"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36577"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36576"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2b38fbaa301d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-188837": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "238fdd0a23dd",
	                        "pause-188837"
	                    ],
	                    "NetworkID": "22fc45c87a68c0c8994f05a99ada433a32bf4fab19f3b1153960f5158ea51118",
	                    "EndpointID": "fd54ae9dc21225805e65762df1ba27d63f93a4c7e19527a90c861d51035e502a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
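The Ports map above shows the host port bound to the container's SSH port (22/tcp -> 127.0.0.1:36579); the start log further down reads the same value back with a Go template. A minimal sketch of that query, assuming the docker CLI is run on the host that owns the container:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-188837
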
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-188837 -n pause-188837
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-188837 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-188837 logs -n 25: (2.430030233s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-770080       | scheduled-stop-770080       | jenkins | v1.31.2 | 14 Sep 23 23:03 UTC | 14 Sep 23 23:04 UTC |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| delete  | -p scheduled-stop-770080       | scheduled-stop-770080       | jenkins | v1.31.2 | 14 Sep 23 23:04 UTC | 14 Sep 23 23:04 UTC |
	| start   | -p insufficient-storage-727065 | insufficient-storage-727065 | jenkins | v1.31.2 | 14 Sep 23 23:04 UTC |                     |
	|         | --memory=2048 --output=json    |                             |         |         |                     |                     |
	|         | --wait=true --driver=docker    |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p insufficient-storage-727065 | insufficient-storage-727065 | jenkins | v1.31.2 | 14 Sep 23 23:04 UTC | 14 Sep 23 23:04 UTC |
	| start   | -p NoKubernetes-836473         | NoKubernetes-836473         | jenkins | v1.31.2 | 14 Sep 23 23:04 UTC |                     |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-836473         | NoKubernetes-836473         | jenkins | v1.31.2 | 14 Sep 23 23:04 UTC | 14 Sep 23 23:05 UTC |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-836473         | NoKubernetes-836473         | jenkins | v1.31.2 | 14 Sep 23 23:05 UTC | 14 Sep 23 23:06 UTC |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p NoKubernetes-836473         | NoKubernetes-836473         | jenkins | v1.31.2 | 14 Sep 23 23:06 UTC | 14 Sep 23 23:06 UTC |
	| start   | -p NoKubernetes-836473         | NoKubernetes-836473         | jenkins | v1.31.2 | 14 Sep 23 23:06 UTC | 14 Sep 23 23:06 UTC |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| ssh     | -p NoKubernetes-836473 sudo    | NoKubernetes-836473         | jenkins | v1.31.2 | 14 Sep 23 23:06 UTC |                     |
	|         | systemctl is-active --quiet    |                             |         |         |                     |                     |
	|         | service kubelet                |                             |         |         |                     |                     |
	| stop    | -p NoKubernetes-836473         | NoKubernetes-836473         | jenkins | v1.31.2 | 14 Sep 23 23:06 UTC | 14 Sep 23 23:06 UTC |
	| start   | -p NoKubernetes-836473         | NoKubernetes-836473         | jenkins | v1.31.2 | 14 Sep 23 23:06 UTC | 14 Sep 23 23:06 UTC |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| ssh     | -p NoKubernetes-836473 sudo    | NoKubernetes-836473         | jenkins | v1.31.2 | 14 Sep 23 23:06 UTC |                     |
	|         | systemctl is-active --quiet    |                             |         |         |                     |                     |
	|         | service kubelet                |                             |         |         |                     |                     |
	| delete  | -p NoKubernetes-836473         | NoKubernetes-836473         | jenkins | v1.31.2 | 14 Sep 23 23:06 UTC | 14 Sep 23 23:06 UTC |
	| start   | -p kubernetes-upgrade-448798   | kubernetes-upgrade-448798   | jenkins | v1.31.2 | 14 Sep 23 23:06 UTC | 14 Sep 23 23:07 UTC |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p missing-upgrade-595333      | missing-upgrade-595333      | jenkins | v1.31.2 | 14 Sep 23 23:06 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-448798   | kubernetes-upgrade-448798   | jenkins | v1.31.2 | 14 Sep 23 23:07 UTC | 14 Sep 23 23:07 UTC |
	| start   | -p kubernetes-upgrade-448798   | kubernetes-upgrade-448798   | jenkins | v1.31.2 | 14 Sep 23 23:07 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1   |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p missing-upgrade-595333      | missing-upgrade-595333      | jenkins | v1.31.2 | 14 Sep 23 23:07 UTC | 14 Sep 23 23:07 UTC |
	| start   | -p stopped-upgrade-686061      | stopped-upgrade-686061      | jenkins | v1.31.2 | 14 Sep 23 23:08 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p stopped-upgrade-686061      | stopped-upgrade-686061      | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC | 14 Sep 23 23:09 UTC |
	| start   | -p running-upgrade-629800      | running-upgrade-629800      | jenkins | v1.31.2 | 14 Sep 23 23:10 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p running-upgrade-629800      | running-upgrade-629800      | jenkins | v1.31.2 | 14 Sep 23 23:10 UTC | 14 Sep 23 23:10 UTC |
	| start   | -p pause-188837 --memory=2048  | pause-188837                | jenkins | v1.31.2 | 14 Sep 23 23:10 UTC | 14 Sep 23 23:11 UTC |
	|         | --install-addons=false         |                             |         |         |                     |                     |
	|         | --wait=all --driver=docker     |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p pause-188837                | pause-188837                | jenkins | v1.31.2 | 14 Sep 23 23:11 UTC | 14 Sep 23 23:12 UTC |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 23:11:04
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 23:11:04.547666 2974444 out.go:296] Setting OutFile to fd 1 ...
	I0914 23:11:04.547907 2974444 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 23:11:04.547934 2974444 out.go:309] Setting ErrFile to fd 2...
	I0914 23:11:04.547953 2974444 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 23:11:04.548248 2974444 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-2840729/.minikube/bin
	I0914 23:11:04.548690 2974444 out.go:303] Setting JSON to false
	I0914 23:11:04.549834 2974444 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":82409,"bootTime":1694650655,"procs":288,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0914 23:11:04.549976 2974444 start.go:138] virtualization:  
	I0914 23:11:04.553370 2974444 out.go:177] * [pause-188837] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0914 23:11:04.555446 2974444 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 23:11:04.557526 2974444 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:11:04.555598 2974444 notify.go:220] Checking for updates...
	I0914 23:11:04.561837 2974444 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 23:11:04.563846 2974444 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube
	I0914 23:11:04.566121 2974444 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 23:11:04.568145 2974444 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:11:04.571124 2974444 config.go:182] Loaded profile config "pause-188837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 23:11:04.571720 2974444 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 23:11:04.601368 2974444 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0914 23:11:04.601480 2974444 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 23:11:04.685703 2974444 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:55 SystemTime:2023-09-14 23:11:04.674761435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 23:11:04.685839 2974444 docker.go:294] overlay module found
	I0914 23:11:04.688888 2974444 out.go:177] * Using the docker driver based on existing profile
	I0914 23:11:04.691019 2974444 start.go:298] selected driver: docker
	I0914 23:11:04.691038 2974444 start.go:902] validating driver "docker" against &{Name:pause-188837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-188837 Namespace:default APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-c
reds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 23:11:04.691177 2974444 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:11:04.691279 2974444 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 23:11:04.759926 2974444 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:55 SystemTime:2023-09-14 23:11:04.750066072 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 23:11:04.760347 2974444 cni.go:84] Creating CNI manager for ""
	I0914 23:11:04.760364 2974444 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 23:11:04.760375 2974444 start_flags.go:321] config:
	{Name:pause-188837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-188837 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesna
pshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 23:11:04.762780 2974444 out.go:177] * Starting control plane node pause-188837 in cluster pause-188837
	I0914 23:11:04.764679 2974444 cache.go:122] Beginning downloading kic base image for docker with crio
	I0914 23:11:04.766482 2974444 out.go:177] * Pulling base image ...
	I0914 23:10:59.908322 2959146 logs.go:123] Gathering logs for CRI-O ...
	I0914 23:10:59.908358 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 23:11:02.464141 2959146 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0914 23:11:02.464618 2959146 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0914 23:11:02.464665 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 23:11:02.464729 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 23:11:02.518491 2959146 cri.go:89] found id: "2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:02.518513 2959146 cri.go:89] found id: ""
	I0914 23:11:02.518522 2959146 logs.go:284] 1 containers: [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51]
	I0914 23:11:02.518576 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:02.523207 2959146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 23:11:02.523279 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 23:11:02.567433 2959146 cri.go:89] found id: ""
	I0914 23:11:02.567459 2959146 logs.go:284] 0 containers: []
	W0914 23:11:02.567468 2959146 logs.go:286] No container was found matching "etcd"
	I0914 23:11:02.567474 2959146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 23:11:02.568216 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 23:11:02.612688 2959146 cri.go:89] found id: ""
	I0914 23:11:02.612709 2959146 logs.go:284] 0 containers: []
	W0914 23:11:02.612718 2959146 logs.go:286] No container was found matching "coredns"
	I0914 23:11:02.612727 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 23:11:02.612784 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 23:11:02.663173 2959146 cri.go:89] found id: "b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:02.663197 2959146 cri.go:89] found id: ""
	I0914 23:11:02.663205 2959146 logs.go:284] 1 containers: [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0]
	I0914 23:11:02.663258 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:02.667785 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 23:11:02.667853 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 23:11:02.707964 2959146 cri.go:89] found id: ""
	I0914 23:11:02.707987 2959146 logs.go:284] 0 containers: []
	W0914 23:11:02.707996 2959146 logs.go:286] No container was found matching "kube-proxy"
	I0914 23:11:02.708003 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 23:11:02.708064 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 23:11:02.750716 2959146 cri.go:89] found id: "7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6"
	I0914 23:11:02.750788 2959146 cri.go:89] found id: ""
	I0914 23:11:02.750804 2959146 logs.go:284] 1 containers: [7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6]
	I0914 23:11:02.750861 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:02.755252 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 23:11:02.755326 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 23:11:02.802360 2959146 cri.go:89] found id: ""
	I0914 23:11:02.802382 2959146 logs.go:284] 0 containers: []
	W0914 23:11:02.802391 2959146 logs.go:286] No container was found matching "kindnet"
	I0914 23:11:02.802397 2959146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 23:11:02.802453 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 23:11:02.850480 2959146 cri.go:89] found id: ""
	I0914 23:11:02.850545 2959146 logs.go:284] 0 containers: []
	W0914 23:11:02.850558 2959146 logs.go:286] No container was found matching "storage-provisioner"
	I0914 23:11:02.850569 2959146 logs.go:123] Gathering logs for kube-controller-manager [7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6] ...
	I0914 23:11:02.850584 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6"
	I0914 23:11:02.892793 2959146 logs.go:123] Gathering logs for CRI-O ...
	I0914 23:11:02.892822 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 23:11:02.942040 2959146 logs.go:123] Gathering logs for container status ...
	I0914 23:11:02.942072 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:11:02.989944 2959146 logs.go:123] Gathering logs for kubelet ...
	I0914 23:11:02.989972 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:11:03.120699 2959146 logs.go:123] Gathering logs for dmesg ...
	I0914 23:11:03.120734 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:11:03.145373 2959146 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:11:03.145404 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 23:11:03.225200 2959146 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 23:11:03.225259 2959146 logs.go:123] Gathering logs for kube-apiserver [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51] ...
	I0914 23:11:03.225278 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:03.281633 2959146 logs.go:123] Gathering logs for kube-scheduler [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0] ...
	I0914 23:11:03.281660 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:04.768513 2974444 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 23:11:04.768569 2974444 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4
	I0914 23:11:04.768594 2974444 cache.go:57] Caching tarball of preloaded images
	I0914 23:11:04.768603 2974444 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 in local docker daemon
	I0914 23:11:04.768683 2974444 preload.go:174] Found /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0914 23:11:04.768696 2974444 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0914 23:11:04.768830 2974444 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837/config.json ...
	I0914 23:11:04.787734 2974444 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 in local docker daemon, skipping pull
	I0914 23:11:04.787758 2974444 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 exists in daemon, skipping load
	I0914 23:11:04.787784 2974444 cache.go:195] Successfully downloaded all kic artifacts
	I0914 23:11:04.787815 2974444 start.go:365] acquiring machines lock for pause-188837: {Name:mka063723d4b6700976ea6407ac3c1ec17d43a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:11:04.787893 2974444 start.go:369] acquired machines lock for "pause-188837" in 50.363µs
	I0914 23:11:04.787917 2974444 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:11:04.787928 2974444 fix.go:54] fixHost starting: 
	I0914 23:11:04.788208 2974444 cli_runner.go:164] Run: docker container inspect pause-188837 --format={{.State.Status}}
	I0914 23:11:04.806509 2974444 fix.go:102] recreateIfNeeded on pause-188837: state=Running err=<nil>
	W0914 23:11:04.806565 2974444 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 23:11:04.808663 2974444 out.go:177] * Updating the running docker "pause-188837" container ...
	I0914 23:11:04.810732 2974444 machine.go:88] provisioning docker machine ...
	I0914 23:11:04.810776 2974444 ubuntu.go:169] provisioning hostname "pause-188837"
	I0914 23:11:04.810849 2974444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188837
	I0914 23:11:04.828956 2974444 main.go:141] libmachine: Using SSH client type: native
	I0914 23:11:04.829381 2974444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36579 <nil> <nil>}
	I0914 23:11:04.829394 2974444 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-188837 && echo "pause-188837" | sudo tee /etc/hostname
	I0914 23:11:04.991112 2974444 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-188837
	
	I0914 23:11:04.991222 2974444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188837
	I0914 23:11:05.011452 2974444 main.go:141] libmachine: Using SSH client type: native
	I0914 23:11:05.011866 2974444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36579 <nil> <nil>}
	I0914 23:11:05.011891 2974444 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-188837' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-188837/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-188837' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 23:11:05.157746 2974444 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 23:11:05.157775 2974444 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17243-2840729/.minikube CaCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17243-2840729/.minikube}
	I0914 23:11:05.157794 2974444 ubuntu.go:177] setting up certificates
	I0914 23:11:05.157804 2974444 provision.go:83] configureAuth start
	I0914 23:11:05.157867 2974444 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-188837
	I0914 23:11:05.176416 2974444 provision.go:138] copyHostCerts
	I0914 23:11:05.176488 2974444 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem, removing ...
	I0914 23:11:05.176641 2974444 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem
	I0914 23:11:05.176728 2974444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.pem (1078 bytes)
	I0914 23:11:05.176862 2974444 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem, removing ...
	I0914 23:11:05.176876 2974444 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem
	I0914 23:11:05.176908 2974444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/cert.pem (1123 bytes)
	I0914 23:11:05.176977 2974444 exec_runner.go:144] found /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem, removing ...
	I0914 23:11:05.176986 2974444 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem
	I0914 23:11:05.177013 2974444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17243-2840729/.minikube/key.pem (1675 bytes)
	I0914 23:11:05.177064 2974444 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem org=jenkins.pause-188837 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube pause-188837]
	I0914 23:11:05.660441 2974444 provision.go:172] copyRemoteCerts
	I0914 23:11:05.660532 2974444 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 23:11:05.660582 2974444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188837
	I0914 23:11:05.678368 2974444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36579 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/pause-188837/id_rsa Username:docker}
	I0914 23:11:05.783074 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 23:11:05.811415 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0914 23:11:05.840113 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 23:11:05.868007 2974444 provision.go:86] duration metric: configureAuth took 710.187904ms
	I0914 23:11:05.868031 2974444 ubuntu.go:193] setting minikube options for container-runtime
	I0914 23:11:05.868244 2974444 config.go:182] Loaded profile config "pause-188837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 23:11:05.868349 2974444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188837
	I0914 23:11:05.887485 2974444 main.go:141] libmachine: Using SSH client type: native
	I0914 23:11:05.887887 2974444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 36579 <nil> <nil>}
	I0914 23:11:05.887902 2974444 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 23:11:05.876912 2959146 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0914 23:11:05.877279 2959146 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0914 23:11:05.877334 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 23:11:05.877401 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 23:11:05.944115 2959146 cri.go:89] found id: "2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:05.944139 2959146 cri.go:89] found id: ""
	I0914 23:11:05.944147 2959146 logs.go:284] 1 containers: [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51]
	I0914 23:11:05.944200 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:05.948823 2959146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 23:11:05.948892 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 23:11:06.000118 2959146 cri.go:89] found id: ""
	I0914 23:11:06.000143 2959146 logs.go:284] 0 containers: []
	W0914 23:11:06.000152 2959146 logs.go:286] No container was found matching "etcd"
	I0914 23:11:06.000159 2959146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 23:11:06.000218 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 23:11:06.048922 2959146 cri.go:89] found id: ""
	I0914 23:11:06.048995 2959146 logs.go:284] 0 containers: []
	W0914 23:11:06.049017 2959146 logs.go:286] No container was found matching "coredns"
	I0914 23:11:06.049043 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 23:11:06.049132 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 23:11:06.138993 2959146 cri.go:89] found id: "b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:06.139012 2959146 cri.go:89] found id: ""
	I0914 23:11:06.139020 2959146 logs.go:284] 1 containers: [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0]
	I0914 23:11:06.139078 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:06.144349 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 23:11:06.144430 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 23:11:06.196603 2959146 cri.go:89] found id: ""
	I0914 23:11:06.196625 2959146 logs.go:284] 0 containers: []
	W0914 23:11:06.196634 2959146 logs.go:286] No container was found matching "kube-proxy"
	I0914 23:11:06.196641 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 23:11:06.196701 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 23:11:06.250183 2959146 cri.go:89] found id: "7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6"
	I0914 23:11:06.250245 2959146 cri.go:89] found id: ""
	I0914 23:11:06.250268 2959146 logs.go:284] 1 containers: [7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6]
	I0914 23:11:06.250362 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:06.254816 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 23:11:06.254884 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 23:11:06.299223 2959146 cri.go:89] found id: ""
	I0914 23:11:06.299245 2959146 logs.go:284] 0 containers: []
	W0914 23:11:06.299253 2959146 logs.go:286] No container was found matching "kindnet"
	I0914 23:11:06.299263 2959146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 23:11:06.299323 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 23:11:06.344853 2959146 cri.go:89] found id: ""
	I0914 23:11:06.344877 2959146 logs.go:284] 0 containers: []
	W0914 23:11:06.344886 2959146 logs.go:286] No container was found matching "storage-provisioner"
	I0914 23:11:06.344896 2959146 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:11:06.344908 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 23:11:06.420897 2959146 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 23:11:06.420918 2959146 logs.go:123] Gathering logs for kube-apiserver [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51] ...
	I0914 23:11:06.420931 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:06.464926 2959146 logs.go:123] Gathering logs for kube-scheduler [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0] ...
	I0914 23:11:06.464998 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:06.573783 2959146 logs.go:123] Gathering logs for kube-controller-manager [7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6] ...
	I0914 23:11:06.573819 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6"
	I0914 23:11:06.616703 2959146 logs.go:123] Gathering logs for CRI-O ...
	I0914 23:11:06.616732 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 23:11:06.663790 2959146 logs.go:123] Gathering logs for container status ...
	I0914 23:11:06.663828 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:11:06.709494 2959146 logs.go:123] Gathering logs for kubelet ...
	I0914 23:11:06.709526 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:11:06.829996 2959146 logs.go:123] Gathering logs for dmesg ...
	I0914 23:11:06.830036 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
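
The lines above show the retry cycle this run is stuck in: probe the apiserver's /healthz endpoint, get connection refused, enumerate CRI containers for each control-plane component, and gather kubelet, dmesg, and CRI-O logs before probing again a few seconds later. As an illustrative sketch only (not minikube's actual api_server.go logic), the health-probe step could look roughly like the following Go snippet; the URL, overall timeout, and 3-second retry interval are assumptions read off the timestamps in the log.

// Hypothetical sketch of an apiserver /healthz poll; not minikube's code.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The apiserver presents a self-signed cert during bootstrap,
			// so verification is skipped in this sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered; the control plane is reachable
			}
		}
		time.Sleep(3 * time.Second) // retry cadence assumed from the log
	}
	return fmt.Errorf("apiserver %s did not report healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.67.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
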
	I0914 23:11:09.354006 2959146 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0914 23:11:09.354399 2959146 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0914 23:11:09.354442 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 23:11:09.354496 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 23:11:09.394045 2959146 cri.go:89] found id: "2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:09.394069 2959146 cri.go:89] found id: ""
	I0914 23:11:09.394078 2959146 logs.go:284] 1 containers: [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51]
	I0914 23:11:09.394136 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:09.398901 2959146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 23:11:09.398975 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 23:11:09.440375 2959146 cri.go:89] found id: ""
	I0914 23:11:09.440401 2959146 logs.go:284] 0 containers: []
	W0914 23:11:09.440409 2959146 logs.go:286] No container was found matching "etcd"
	I0914 23:11:09.440416 2959146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 23:11:09.440478 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 23:11:09.481064 2959146 cri.go:89] found id: ""
	I0914 23:11:09.481085 2959146 logs.go:284] 0 containers: []
	W0914 23:11:09.481093 2959146 logs.go:286] No container was found matching "coredns"
	I0914 23:11:09.481100 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 23:11:09.481160 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 23:11:09.521789 2959146 cri.go:89] found id: "b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:09.521867 2959146 cri.go:89] found id: ""
	I0914 23:11:09.521891 2959146 logs.go:284] 1 containers: [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0]
	I0914 23:11:09.521963 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:09.526211 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 23:11:09.526277 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 23:11:09.569125 2959146 cri.go:89] found id: ""
	I0914 23:11:09.569156 2959146 logs.go:284] 0 containers: []
	W0914 23:11:09.569165 2959146 logs.go:286] No container was found matching "kube-proxy"
	I0914 23:11:09.569171 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 23:11:09.569235 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 23:11:09.610947 2959146 cri.go:89] found id: "7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6"
	I0914 23:11:09.610970 2959146 cri.go:89] found id: ""
	I0914 23:11:09.610978 2959146 logs.go:284] 1 containers: [7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6]
	I0914 23:11:09.611033 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:09.615324 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 23:11:09.615397 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 23:11:09.655642 2959146 cri.go:89] found id: ""
	I0914 23:11:09.655671 2959146 logs.go:284] 0 containers: []
	W0914 23:11:09.655680 2959146 logs.go:286] No container was found matching "kindnet"
	I0914 23:11:09.655686 2959146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 23:11:09.655756 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 23:11:09.706305 2959146 cri.go:89] found id: ""
	I0914 23:11:09.706371 2959146 logs.go:284] 0 containers: []
	W0914 23:11:09.706394 2959146 logs.go:286] No container was found matching "storage-provisioner"
	I0914 23:11:09.706419 2959146 logs.go:123] Gathering logs for kube-apiserver [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51] ...
	I0914 23:11:09.706461 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:09.767395 2959146 logs.go:123] Gathering logs for kube-scheduler [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0] ...
	I0914 23:11:09.767421 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:11.335581 2974444 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 23:11:11.335605 2974444 machine.go:91] provisioned docker machine in 6.524857182s
	I0914 23:11:11.335616 2974444 start.go:300] post-start starting for "pause-188837" (driver="docker")
	I0914 23:11:11.335626 2974444 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 23:11:11.335688 2974444 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 23:11:11.335737 2974444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188837
	I0914 23:11:11.365208 2974444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36579 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/pause-188837/id_rsa Username:docker}
	I0914 23:11:11.563934 2974444 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 23:11:11.592618 2974444 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 23:11:11.592655 2974444 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 23:11:11.592666 2974444 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 23:11:11.592674 2974444 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0914 23:11:11.592685 2974444 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-2840729/.minikube/addons for local assets ...
	I0914 23:11:11.592745 2974444 filesync.go:126] Scanning /home/jenkins/minikube-integration/17243-2840729/.minikube/files for local assets ...
	I0914 23:11:11.592837 2974444 filesync.go:149] local asset: /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem -> 28461092.pem in /etc/ssl/certs
	I0914 23:11:11.592947 2974444 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 23:11:11.627618 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem --> /etc/ssl/certs/28461092.pem (1708 bytes)
	I0914 23:11:11.697755 2974444 start.go:303] post-start completed in 362.123663ms
	I0914 23:11:11.697844 2974444 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 23:11:11.697897 2974444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188837
	I0914 23:11:11.733844 2974444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36579 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/pause-188837/id_rsa Username:docker}
	I0914 23:11:11.920063 2974444 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 23:11:11.932169 2974444 fix.go:56] fixHost completed within 7.144232092s
	I0914 23:11:11.932191 2974444 start.go:83] releasing machines lock for "pause-188837", held for 7.144286402s
	I0914 23:11:11.932267 2974444 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-188837
	I0914 23:11:11.961776 2974444 ssh_runner.go:195] Run: cat /version.json
	I0914 23:11:11.961844 2974444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188837
	I0914 23:11:11.961784 2974444 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 23:11:11.961937 2974444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-188837
	I0914 23:11:12.031313 2974444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36579 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/pause-188837/id_rsa Username:docker}
	I0914 23:11:12.032691 2974444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36579 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/pause-188837/id_rsa Username:docker}
	I0914 23:11:12.329048 2974444 ssh_runner.go:195] Run: systemctl --version
	I0914 23:11:12.344393 2974444 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 23:11:12.574254 2974444 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 23:11:12.594601 2974444 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 23:11:12.622917 2974444 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0914 23:11:12.623060 2974444 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 23:11:12.650952 2974444 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
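
For context, the stat/find/mv commands above implement a simple convention: any loopback, bridge, or podman CNI config under /etc/cni/net.d is renamed with a ".mk_disabled" suffix so the runtime ignores it and the CNI minikube selects (kindnet in this run) can take over. Below is a stand-alone Go sketch of that rename step, assuming the directory and name filters shown in the log (illustrative only, not minikube's implementation).

// Hypothetical sketch: disable bridge/podman CNI configs by renaming them.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue // already disabled or not a plain config file
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			// Appending ".mk_disabled" mirrors the `mv {} {}.mk_disabled` in the log.
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Println(err)
			}
		}
	}
}
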
	I0914 23:11:12.651020 2974444 start.go:469] detecting cgroup driver to use...
	I0914 23:11:12.651065 2974444 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0914 23:11:12.651150 2974444 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 23:11:12.682304 2974444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 23:11:12.709810 2974444 docker.go:196] disabling cri-docker service (if available) ...
	I0914 23:11:12.709918 2974444 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 23:11:12.752989 2974444 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 23:11:12.778991 2974444 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 23:11:13.053546 2974444 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 23:11:13.323443 2974444 docker.go:212] disabling docker service ...
	I0914 23:11:13.323575 2974444 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 23:11:13.384877 2974444 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 23:11:13.420070 2974444 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 23:11:13.720055 2974444 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 23:11:14.062535 2974444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 23:11:14.095915 2974444 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 23:11:14.172141 2974444 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0914 23:11:14.172205 2974444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 23:11:14.212705 2974444 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 23:11:14.212778 2974444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 23:11:14.261056 2974444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 23:11:14.327733 2974444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 23:11:14.378301 2974444 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 23:11:14.410989 2974444 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 23:11:14.443201 2974444 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 23:11:14.467790 2974444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
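
The sequence above prepares CRI-O before the restart that follows: /etc/crictl.yaml is written to point crictl at CRI-O's socket, the pause_image and cgroup_manager keys in /etc/crio/crio.conf.d/02-crio.conf are rewritten with sed, IP forwarding is enabled, and systemd is reloaded. Purely as an illustration of that in-place config rewrite (not minikube's implementation), the same edit could be expressed in Go as follows; the path and values simply mirror the commands in the log.

// Hypothetical sketch: rewrite pause_image and cgroup_manager in a CRI-O drop-in.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)
	// Point CRI-O at the pause image matching this Kubernetes version.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Match the kubelet's cgroup driver ("cgroupfs" in this run).
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		panic(err)
	}
	// A `systemctl restart crio` is still required for the change to apply.
}
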
	I0914 23:11:09.861920 2959146 logs.go:123] Gathering logs for kube-controller-manager [7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6] ...
	I0914 23:11:09.861953 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6"
	I0914 23:11:09.904847 2959146 logs.go:123] Gathering logs for CRI-O ...
	I0914 23:11:09.904913 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 23:11:09.954409 2959146 logs.go:123] Gathering logs for container status ...
	I0914 23:11:09.954443 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:11:09.999833 2959146 logs.go:123] Gathering logs for kubelet ...
	I0914 23:11:09.999864 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:11:10.118140 2959146 logs.go:123] Gathering logs for dmesg ...
	I0914 23:11:10.118175 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:11:10.141567 2959146 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:11:10.141599 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 23:11:10.222204 2959146 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 23:11:12.722890 2959146 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0914 23:11:12.723216 2959146 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0914 23:11:12.723252 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 23:11:12.723302 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 23:11:12.804110 2959146 cri.go:89] found id: "2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:12.804129 2959146 cri.go:89] found id: ""
	I0914 23:11:12.804136 2959146 logs.go:284] 1 containers: [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51]
	I0914 23:11:12.804190 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:12.812112 2959146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 23:11:12.812181 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 23:11:12.887567 2959146 cri.go:89] found id: ""
	I0914 23:11:12.887588 2959146 logs.go:284] 0 containers: []
	W0914 23:11:12.887597 2959146 logs.go:286] No container was found matching "etcd"
	I0914 23:11:12.887604 2959146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 23:11:12.887658 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 23:11:12.957646 2959146 cri.go:89] found id: ""
	I0914 23:11:12.957667 2959146 logs.go:284] 0 containers: []
	W0914 23:11:12.957676 2959146 logs.go:286] No container was found matching "coredns"
	I0914 23:11:12.957682 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 23:11:12.957739 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 23:11:13.056633 2959146 cri.go:89] found id: "b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:13.056658 2959146 cri.go:89] found id: ""
	I0914 23:11:13.056667 2959146 logs.go:284] 1 containers: [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0]
	I0914 23:11:13.056719 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:13.061591 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 23:11:13.061662 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 23:11:13.123738 2959146 cri.go:89] found id: ""
	I0914 23:11:13.123763 2959146 logs.go:284] 0 containers: []
	W0914 23:11:13.123772 2959146 logs.go:286] No container was found matching "kube-proxy"
	I0914 23:11:13.123778 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 23:11:13.123836 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 23:11:13.203133 2959146 cri.go:89] found id: "7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6"
	I0914 23:11:13.203153 2959146 cri.go:89] found id: ""
	I0914 23:11:13.203160 2959146 logs.go:284] 1 containers: [7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6]
	I0914 23:11:13.203219 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:13.208142 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 23:11:13.208210 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 23:11:13.284264 2959146 cri.go:89] found id: ""
	I0914 23:11:13.284337 2959146 logs.go:284] 0 containers: []
	W0914 23:11:13.284359 2959146 logs.go:286] No container was found matching "kindnet"
	I0914 23:11:13.284380 2959146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 23:11:13.284472 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 23:11:13.354738 2959146 cri.go:89] found id: ""
	I0914 23:11:13.354760 2959146 logs.go:284] 0 containers: []
	W0914 23:11:13.354768 2959146 logs.go:286] No container was found matching "storage-provisioner"
	I0914 23:11:13.354777 2959146 logs.go:123] Gathering logs for container status ...
	I0914 23:11:13.354789 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:11:13.452903 2959146 logs.go:123] Gathering logs for kubelet ...
	I0914 23:11:13.452979 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:11:13.611685 2959146 logs.go:123] Gathering logs for dmesg ...
	I0914 23:11:13.611756 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:11:13.635802 2959146 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:11:13.636009 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 23:11:13.813877 2959146 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 23:11:13.813939 2959146 logs.go:123] Gathering logs for kube-apiserver [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51] ...
	I0914 23:11:13.813964 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:13.881682 2959146 logs.go:123] Gathering logs for kube-scheduler [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0] ...
	I0914 23:11:13.881751 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:14.060069 2959146 logs.go:123] Gathering logs for kube-controller-manager [7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6] ...
	I0914 23:11:14.060142 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6"
	I0914 23:11:14.155751 2959146 logs.go:123] Gathering logs for CRI-O ...
	I0914 23:11:14.155777 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 23:11:14.747234 2974444 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 23:11:16.740055 2959146 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0914 23:11:16.740445 2959146 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0914 23:11:16.740484 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 23:11:16.740576 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 23:11:16.797402 2959146 cri.go:89] found id: "2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:16.797434 2959146 cri.go:89] found id: ""
	I0914 23:11:16.797443 2959146 logs.go:284] 1 containers: [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51]
	I0914 23:11:16.797503 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:16.802781 2959146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 23:11:16.802856 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 23:11:16.852238 2959146 cri.go:89] found id: ""
	I0914 23:11:16.852265 2959146 logs.go:284] 0 containers: []
	W0914 23:11:16.852283 2959146 logs.go:286] No container was found matching "etcd"
	I0914 23:11:16.852290 2959146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 23:11:16.852350 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 23:11:16.912145 2959146 cri.go:89] found id: ""
	I0914 23:11:16.912172 2959146 logs.go:284] 0 containers: []
	W0914 23:11:16.912181 2959146 logs.go:286] No container was found matching "coredns"
	I0914 23:11:16.912187 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 23:11:16.912252 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 23:11:16.983225 2959146 cri.go:89] found id: "b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:16.983247 2959146 cri.go:89] found id: ""
	I0914 23:11:16.983256 2959146 logs.go:284] 1 containers: [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0]
	I0914 23:11:16.983312 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:16.988382 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 23:11:16.988453 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 23:11:17.040271 2959146 cri.go:89] found id: ""
	I0914 23:11:17.040299 2959146 logs.go:284] 0 containers: []
	W0914 23:11:17.040308 2959146 logs.go:286] No container was found matching "kube-proxy"
	I0914 23:11:17.040314 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 23:11:17.040370 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 23:11:17.093140 2959146 cri.go:89] found id: "44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387"
	I0914 23:11:17.093208 2959146 cri.go:89] found id: "7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6"
	I0914 23:11:17.093223 2959146 cri.go:89] found id: ""
	I0914 23:11:17.093232 2959146 logs.go:284] 2 containers: [44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387 7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6]
	I0914 23:11:17.093305 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:17.098066 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:17.102517 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 23:11:17.102584 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 23:11:17.144920 2959146 cri.go:89] found id: ""
	I0914 23:11:17.144943 2959146 logs.go:284] 0 containers: []
	W0914 23:11:17.144951 2959146 logs.go:286] No container was found matching "kindnet"
	I0914 23:11:17.144958 2959146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 23:11:17.145014 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 23:11:17.190580 2959146 cri.go:89] found id: ""
	I0914 23:11:17.190603 2959146 logs.go:284] 0 containers: []
	W0914 23:11:17.190611 2959146 logs.go:286] No container was found matching "storage-provisioner"
	I0914 23:11:17.190625 2959146 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:11:17.190637 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 23:11:17.265597 2959146 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 23:11:17.265619 2959146 logs.go:123] Gathering logs for kube-controller-manager [7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6] ...
	I0914 23:11:17.265637 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6"
	I0914 23:11:17.308731 2959146 logs.go:123] Gathering logs for CRI-O ...
	I0914 23:11:17.308760 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 23:11:17.358092 2959146 logs.go:123] Gathering logs for kube-controller-manager [44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387] ...
	I0914 23:11:17.358126 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387"
	I0914 23:11:17.401252 2959146 logs.go:123] Gathering logs for container status ...
	I0914 23:11:17.401280 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:11:17.445879 2959146 logs.go:123] Gathering logs for kubelet ...
	I0914 23:11:17.445905 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:11:17.569812 2959146 logs.go:123] Gathering logs for dmesg ...
	I0914 23:11:17.569849 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:11:17.593846 2959146 logs.go:123] Gathering logs for kube-apiserver [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51] ...
	I0914 23:11:17.593878 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:17.656326 2959146 logs.go:123] Gathering logs for kube-scheduler [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0] ...
	I0914 23:11:17.656356 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:22.305411 2974444 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.558107779s)
	I0914 23:11:22.305438 2974444 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 23:11:22.305506 2974444 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 23:11:22.311297 2974444 start.go:537] Will wait 60s for crictl version
	I0914 23:11:22.311366 2974444 ssh_runner.go:195] Run: which crictl
	I0914 23:11:22.315744 2974444 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 23:11:22.358423 2974444 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0914 23:11:22.358507 2974444 ssh_runner.go:195] Run: crio --version
	I0914 23:11:22.406220 2974444 ssh_runner.go:195] Run: crio --version
	I0914 23:11:22.462128 2974444 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0914 23:11:22.464122 2974444 cli_runner.go:164] Run: docker network inspect pause-188837 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 23:11:22.481350 2974444 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0914 23:11:22.486007 2974444 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 23:11:22.486075 2974444 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 23:11:22.526083 2974444 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 23:11:22.526106 2974444 crio.go:415] Images already preloaded, skipping extraction
	I0914 23:11:22.526160 2974444 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 23:11:22.565865 2974444 crio.go:496] all images are preloaded for cri-o runtime.
	I0914 23:11:22.565889 2974444 cache_images.go:84] Images are preloaded, skipping loading
	I0914 23:11:22.565965 2974444 ssh_runner.go:195] Run: crio config
	I0914 23:11:22.627062 2974444 cni.go:84] Creating CNI manager for ""
	I0914 23:11:22.627086 2974444 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 23:11:22.627111 2974444 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 23:11:22.627130 2974444 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-188837 NodeName:pause-188837 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 23:11:22.627275 2974444 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-188837"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 23:11:22.627351 2974444 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-188837 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:pause-188837 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
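
The kubeadm YAML, the kubelet systemd drop-in, and the cluster config summary above are all rendered from a handful of node parameters (Kubernetes version, node name, node IP) and then copied onto the machine, as the scp lines just below show. The following is a hypothetical Go sketch of rendering such a drop-in with text/template; the template text is a trimmed copy of the ExecStart line in the log, not minikube's actual template.

// Hypothetical sketch: render a kubelet systemd drop-in from node parameters.
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	params := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{KubernetesVersion: "v1.28.1", NodeName: "pause-188837", NodeIP: "192.168.76.2"}
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Printed to stdout here; the log shows minikube instead copying the
	// rendered unit to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
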
	I0914 23:11:22.627422 2974444 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 23:11:22.638184 2974444 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 23:11:22.638267 2974444 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 23:11:22.648371 2974444 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I0914 23:11:22.668685 2974444 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 23:11:22.689035 2974444 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I0914 23:11:22.709604 2974444 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0914 23:11:22.714013 2974444 certs.go:56] Setting up /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837 for IP: 192.168.76.2
	I0914 23:11:22.714049 2974444 certs.go:190] acquiring lock for shared ca certs: {Name:mk7b43b7d537d49c569d06654003547535d1ca4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:11:22.714185 2974444 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.key
	I0914 23:11:22.714231 2974444 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.key
	I0914 23:11:22.714306 2974444 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837/client.key
	I0914 23:11:22.714375 2974444 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837/apiserver.key.31bdca25
	I0914 23:11:22.714429 2974444 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837/proxy-client.key
	I0914 23:11:22.714546 2974444 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/2846109.pem (1338 bytes)
	W0914 23:11:22.714579 2974444 certs.go:433] ignoring /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/2846109_empty.pem, impossibly tiny 0 bytes
	I0914 23:11:22.714591 2974444 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 23:11:22.714619 2974444 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/ca.pem (1078 bytes)
	I0914 23:11:22.714646 2974444 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/cert.pem (1123 bytes)
	I0914 23:11:22.714673 2974444 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/certs/key.pem (1675 bytes)
	I0914 23:11:22.714726 2974444 certs.go:437] found cert: /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem (1708 bytes)
	I0914 23:11:22.715797 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 23:11:22.747305 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 23:11:22.775146 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 23:11:22.802231 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 23:11:22.829806 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 23:11:22.857834 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 23:11:22.886062 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 23:11:22.913519 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 23:11:22.941281 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/ssl/certs/28461092.pem --> /usr/share/ca-certificates/28461092.pem (1708 bytes)
	I0914 23:11:22.968583 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 23:11:22.996220 2974444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17243-2840729/.minikube/certs/2846109.pem --> /usr/share/ca-certificates/2846109.pem (1338 bytes)
	I0914 23:11:23.023935 2974444 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 23:11:23.045041 2974444 ssh_runner.go:195] Run: openssl version
	I0914 23:11:23.052065 2974444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 23:11:23.063557 2974444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 23:11:23.068065 2974444 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 22:27 /usr/share/ca-certificates/minikubeCA.pem
	I0914 23:11:23.068126 2974444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 23:11:23.076644 2974444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 23:11:23.087423 2974444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2846109.pem && ln -fs /usr/share/ca-certificates/2846109.pem /etc/ssl/certs/2846109.pem"
	I0914 23:11:23.099605 2974444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2846109.pem
	I0914 23:11:23.104190 2974444 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 22:34 /usr/share/ca-certificates/2846109.pem
	I0914 23:11:23.104257 2974444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2846109.pem
	I0914 23:11:23.112868 2974444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2846109.pem /etc/ssl/certs/51391683.0"
	I0914 23:11:23.123578 2974444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28461092.pem && ln -fs /usr/share/ca-certificates/28461092.pem /etc/ssl/certs/28461092.pem"
	I0914 23:11:23.135088 2974444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28461092.pem
	I0914 23:11:23.139680 2974444 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 22:34 /usr/share/ca-certificates/28461092.pem
	I0914 23:11:23.139750 2974444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28461092.pem
	I0914 23:11:23.148653 2974444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/28461092.pem /etc/ssl/certs/3ec20f2e.0"
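
The three openssl/ln pairs above install certificates into the host trust store the way OpenSSL expects: hash the certificate's subject with openssl x509 -hash -noout -in <cert> (b5213941 for minikubeCA.pem in this run) and symlink the file as /etc/ssl/certs/<hash>.0. Here is a self-contained Go sketch of that step, assuming root and the paths shown in the log (illustrative only, not minikube's certs.go).

// Hypothetical sketch: add a PEM certificate to /etc/ssl/certs by subject hash.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func trustCert(pemPath string) error {
	// `openssl x509 -hash -noout` prints the subject hash used as the link name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // replace any stale link; a "not found" error here is harmless
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
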
	I0914 23:11:23.159472 2974444 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 23:11:23.163804 2974444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 23:11:23.172129 2974444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 23:11:23.180710 2974444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 23:11:23.189162 2974444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 23:11:23.197654 2974444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 23:11:23.206148 2974444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 23:11:23.214421 2974444 kubeadm.go:404] StartCluster: {Name:pause-188837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-188837 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 23:11:23.214540 2974444 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 23:11:23.214600 2974444 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 23:11:23.257814 2974444 cri.go:89] found id: "3096294cf9ef2836640c6a63f9f3dfa4e709195bb30bc3eeb3edb0e785df0352"
	I0914 23:11:23.257883 2974444 cri.go:89] found id: "3ddca4ba5663e330ca359974f99a382136b3178138059f123f035b5ef453142b"
	I0914 23:11:23.257894 2974444 cri.go:89] found id: "bb70641b5371bd8bcb9d2648c449748c7bdc9a36befbcdba5dfb4add155e6e45"
	I0914 23:11:23.257902 2974444 cri.go:89] found id: "d8085d7321e65ea2e472b73ea3e32fb771ef86f98dd31e666c61b721765ec678"
	I0914 23:11:23.257906 2974444 cri.go:89] found id: "b148ea09494cac754d9aba5297c8b8139adb0036057ac049874cec2c08d50c9b"
	I0914 23:11:23.257912 2974444 cri.go:89] found id: "1ee06b53307457c677083117c9f598f2e47155f0ef6cfb7f16d3da49bc0c7336"
	I0914 23:11:23.257916 2974444 cri.go:89] found id: "f13a0e17ffd11a8efa7730301233ce818c79283eb992e004b5809a5e02460197"
	I0914 23:11:23.257921 2974444 cri.go:89] found id: "3b993f4a4efd1b16d3a6e63807f6858e16f35f158bca8d4683eee16ab99e85fb"
	I0914 23:11:23.257934 2974444 cri.go:89] found id: "c4d6d8178faf698da5ef35226961a972825968c13748ebe40df01345fd299a55"
	I0914 23:11:23.257944 2974444 cri.go:89] found id: "2871ed46af76b9eb0ff9f55c298e927ed32067ed58285b1f45f29c44ffda1868"
	I0914 23:11:23.257948 2974444 cri.go:89] found id: "a0c959683b247e8f18bad2254846f7f5800d6d75983dba1d009db272306fe74c"
	I0914 23:11:23.257952 2974444 cri.go:89] found id: "a11cc7c0c39389608d37cf9b9d99c9e773dd54af5e8e354ba84b9d667da0e11b"
	I0914 23:11:23.257957 2974444 cri.go:89] found id: "75d3d548281e02fd6383390a0e86f5df044efa5c5d00a8e6736524f3b53e876a"
	I0914 23:11:23.257967 2974444 cri.go:89] found id: "1856cb02fae9437a99a0b362295ee3197a1ec49123b0d2c8de9cc17137e0233a"
	I0914 23:11:23.257975 2974444 cri.go:89] found id: ""
	I0914 23:11:23.258026 2974444 ssh_runner.go:195] Run: sudo runc list -f json
	I0914 23:11:23.296936 2974444 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"1856cb02fae9437a99a0b362295ee3197a1ec49123b0d2c8de9cc17137e0233a","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/1856cb02fae9437a99a0b362295ee3197a1ec49123b0d2c8de9cc17137e0233a/userdata","rootfs":"/var/lib/containers/storage/overlay/70ebd58a95c7434e2d147538967ad102fb6e66192010f008b9f4debb1b32d68a/merged","created":"2023-09-14T23:10:34.461302205Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b4bfd9d0","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b4bfd9d0\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminatio
nMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1856cb02fae9437a99a0b362295ee3197a1ec49123b0d2c8de9cc17137e0233a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:10:34.331603852Z","io.kubernetes.cri-o.Image":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.1","io.kubernetes.cri-o.ImageRef":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-188837\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9daa06f1bce90ea27262295fdd763f52\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-188837_9daa06f1bce90ea27262295fdd763f52/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kube
rnetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/70ebd58a95c7434e2d147538967ad102fb6e66192010f008b9f4debb1b32d68a/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-188837_kube-system_9daa06f1bce90ea27262295fdd763f52_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a5fbc85f793393abbb3f5762b72f835014218bc8cb33ad2b3adf4eec5cee35fd/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a5fbc85f793393abbb3f5762b72f835014218bc8cb33ad2b3adf4eec5cee35fd","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-188837_kube-system_9daa06f1bce90ea27262295fdd763f52_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9daa06f1bce90ea27262295fdd763f52/containers/kube-apiserver/64e4b917\",\"readonly\":false,\"propagation\":0,\"selinux_re
label\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9daa06f1bce90ea27262295fdd763f52/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-188837","io.k
ubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9daa06f1bce90ea27262295fdd763f52","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.76.2:8443","kubernetes.io/config.hash":"9daa06f1bce90ea27262295fdd763f52","kubernetes.io/config.seen":"2023-09-14T23:10:33.801531701Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1ee06b53307457c677083117c9f598f2e47155f0ef6cfb7f16d3da49bc0c7336","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/1ee06b53307457c677083117c9f598f2e47155f0ef6cfb7f16d3da49bc0c7336/userdata","rootfs":"/var/lib/containers/storage/overlay/52aa0ded4b862138f24c8066d24452d9951215b3af2382a5d24724cf1990fd0b/merged","created":"2023-09-14T23:11:11.613927439Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"68d78db8","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container
.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"68d78db8\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1ee06b53307457c677083117c9f598f2e47155f0ef6cfb7f16d3da49bc0c7336","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:11:11.408097823Z","io.kubernetes.cri-o.Image":"812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.28.1","io.kubernetes.cri-o.ImageRef":"812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-lprw
g\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b888ea22-8d29-4c36-a973-02cd1262b1ae\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-lprwg_b888ea22-8d29-4c36-a973-02cd1262b1ae/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/52aa0ded4b862138f24c8066d24452d9951215b3af2382a5d24724cf1990fd0b/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-lprwg_kube-system_b888ea22-8d29-4c36-a973-02cd1262b1ae_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/10108216db33f60880a26371aabcf6d385af82e24d78d1ca21091ccf1bba5b8b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"10108216db33f60880a26371aabcf6d385af82e24d78d1ca21091ccf1bba5b8b","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-lprwg_kube-system_b888ea22-8d29-4c36-a973-02cd1262b1ae_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"fa
lse","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b888ea22-8d29-4c36-a973-02cd1262b1ae/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b888ea22-8d29-4c36-a973-02cd1262b1ae/containers/kube-proxy/280f19d9\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/b888ea22-8d29-4c36-a973-02cd1262b1ae/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"contain
er_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/b888ea22-8d29-4c36-a973-02cd1262b1ae/volumes/kubernetes.io~projected/kube-api-access-d7stp\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-lprwg","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b888ea22-8d29-4c36-a973-02cd1262b1ae","kubernetes.io/config.seen":"2023-09-14T23:10:55.368412718Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2871ed46af76b9eb0ff9f55c298e927ed32067ed58285b1f45f29c44ffda1868","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/2871ed46af76b9eb0ff9f55c298e927ed32067ed58285b1f45f29c44ffda1868/userdata","rootfs":"/var/lib/containers/storage/overlay/22c8603f18526a7027605bff240cabde6868bd0978333972c222d3baa3bc683a/merged","created":"2023-09-14T23:10:56.995413759Z","annotations":{"io.container.manager":"c
ri-o","io.kubernetes.container.hash":"68d78db8","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"68d78db8\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2871ed46af76b9eb0ff9f55c298e927ed32067ed58285b1f45f29c44ffda1868","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:10:56.930186483Z","io.kubernetes.cri-o.Image":"812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.28.1","io.kubernetes.cri-o.ImageRef":"812f5241df7fd64adb98d461bd6259
a825a371fb3b2d5258752579380bc39c26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-lprwg\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b888ea22-8d29-4c36-a973-02cd1262b1ae\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-lprwg_b888ea22-8d29-4c36-a973-02cd1262b1ae/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/22c8603f18526a7027605bff240cabde6868bd0978333972c222d3baa3bc683a/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-lprwg_kube-system_b888ea22-8d29-4c36-a973-02cd1262b1ae_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/10108216db33f60880a26371aabcf6d385af82e24d78d1ca21091ccf1bba5b8b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"10108216db33f60880a26371aabcf6d385af82e24d78d1ca21091ccf1bba5b8b","io.kubernetes.cri-o.SandboxName":"k8s
_kube-proxy-lprwg_kube-system_b888ea22-8d29-4c36-a973-02cd1262b1ae_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b888ea22-8d29-4c36-a973-02cd1262b1ae/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b888ea22-8d29-4c36-a973-02cd1262b1ae/containers/kube-proxy/3366196d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/b888ea22-8
d29-4c36-a973-02cd1262b1ae/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/b888ea22-8d29-4c36-a973-02cd1262b1ae/volumes/kubernetes.io~projected/kube-api-access-d7stp\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-lprwg","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b888ea22-8d29-4c36-a973-02cd1262b1ae","kubernetes.io/config.seen":"2023-09-14T23:10:55.368412718Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3096294cf9ef2836640c6a63f9f3dfa4e709195bb30bc3eeb3edb0e785df0352","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/3096294cf9ef2836640c6a63f9f3dfa4e709195bb30bc3eeb3edb0e785df0352/userdata","rootfs":"/var/lib/containers/storage/overlay/6a2219624a8
97ac1b01bf3deb522d9af38aa71aa1c91ce81e63a418dfb9d94b0/merged","created":"2023-09-14T23:11:11.665297124Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"4ad3610d","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"4ad3610d\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernet
es.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3096294cf9ef2836640c6a63f9f3dfa4e709195bb30bc3eeb3edb0e785df0352","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:11:11.566603167Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri-o.ImageRef":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5dd5756b68-fsjl2\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"67bad9d6-02e3-402b-b63e-83403a6c00c4\"}","io.kubernetes.cri-o.LogPath":
"/var/log/pods/kube-system_coredns-5dd5756b68-fsjl2_67bad9d6-02e3-402b-b63e-83403a6c00c4/coredns/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6a2219624a897ac1b01bf3deb522d9af38aa71aa1c91ce81e63a418dfb9d94b0/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5dd5756b68-fsjl2_kube-system_67bad9d6-02e3-402b-b63e-83403a6c00c4_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2edc7f20ed3d342b17ce93090a09bad764b7bd98bd63c25e83bf030983c6f1f7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2edc7f20ed3d342b17ce93090a09bad764b7bd98bd63c25e83bf030983c6f1f7","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5dd5756b68-fsjl2_kube-system_67bad9d6-02e3-402b-b63e-83403a6c00c4_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/et
c/coredns\",\"host_path\":\"/var/lib/kubelet/pods/67bad9d6-02e3-402b-b63e-83403a6c00c4/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/67bad9d6-02e3-402b-b63e-83403a6c00c4/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/67bad9d6-02e3-402b-b63e-83403a6c00c4/containers/coredns/3a1aaeed\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/67bad9d6-02e3-402b-b63e-83403a6c00c4/volumes/kubernetes.io~projected/kube-api-access-q2bfz\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5dd5756b68-fsjl2","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.p
od.uid":"67bad9d6-02e3-402b-b63e-83403a6c00c4","kubernetes.io/config.seen":"2023-09-14T23:10:59.342925004Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3b993f4a4efd1b16d3a6e63807f6858e16f35f158bca8d4683eee16ab99e85fb","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/3b993f4a4efd1b16d3a6e63807f6858e16f35f158bca8d4683eee16ab99e85fb/userdata","rootfs":"/var/lib/containers/storage/overlay/bc032752c44f88a9111693661950ebb049ec6208c01f2ad2b51617a384b91220/merged","created":"2023-09-14T23:10:59.814350642Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"4ad3610d","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath
":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"4ad3610d\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3b993f4a4efd1b16d3a6e63807f6858e16f35f158bca8d4683eee16ab99e85fb","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:10:59.766653882Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa786
27c432ef754a4eb108","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri-o.ImageRef":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5dd5756b68-fsjl2\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"67bad9d6-02e3-402b-b63e-83403a6c00c4\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5dd5756b68-fsjl2_67bad9d6-02e3-402b-b63e-83403a6c00c4/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/bc032752c44f88a9111693661950ebb049ec6208c01f2ad2b51617a384b91220/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5dd5756b68-fsjl2_kube-system_67bad9d6-02e3-402b-b63e-83403a6c00c4_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2edc7f20ed3d342b17ce93090a09bad764b7bd98bd63c25e83bf0309
83c6f1f7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2edc7f20ed3d342b17ce93090a09bad764b7bd98bd63c25e83bf030983c6f1f7","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5dd5756b68-fsjl2_kube-system_67bad9d6-02e3-402b-b63e-83403a6c00c4_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/67bad9d6-02e3-402b-b63e-83403a6c00c4/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/67bad9d6-02e3-402b-b63e-83403a6c00c4/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/67bad9d6-02e3-402b-b63e-83403a6c00c4/containers/coredns/e12d93bf\",\"readonly\":false,\"propagation\
":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/67bad9d6-02e3-402b-b63e-83403a6c00c4/volumes/kubernetes.io~projected/kube-api-access-q2bfz\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5dd5756b68-fsjl2","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"67bad9d6-02e3-402b-b63e-83403a6c00c4","kubernetes.io/config.seen":"2023-09-14T23:10:59.342925004Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3ddca4ba5663e330ca359974f99a382136b3178138059f123f035b5ef453142b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/3ddca4ba5663e330ca359974f99a382136b3178138059f123f035b5ef453142b/userdata","rootfs":"/var/lib/containers/storage/overlay/f681e4c26c0d53de438ce4443108563066d04678d8b95042f1e7c8f2883a7283/merged","created":"2023-09-14T23:11:11.688
966542Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"3673094b","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"3673094b\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3ddca4ba5663e330ca359974f99a382136b3178138059f123f035b5ef453142b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:11:11.539627207Z","io.kubernetes.cri-o.Image":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri-o.Imag
eRef":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-188837\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"19dcc362ef0990caebeed73c36545e51\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-188837_19dcc362ef0990caebeed73c36545e51/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f681e4c26c0d53de438ce4443108563066d04678d8b95042f1e7c8f2883a7283/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-188837_kube-system_19dcc362ef0990caebeed73c36545e51_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/08ee086b60e39861e1ce0a94ebc01a8970091f0deec9eba68df06d4b4c8d1197/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"08ee086b60e39861e1ce0a94ebc01a8970091f0deec9eba68df06d4b4c8d1197","io.kubernetes.cri-o
.SandboxName":"k8s_etcd-pause-188837_kube-system_19dcc362ef0990caebeed73c36545e51_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/19dcc362ef0990caebeed73c36545e51/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/19dcc362ef0990caebeed73c36545e51/containers/etcd/ddfa2bff\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-188837","
io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"19dcc362ef0990caebeed73c36545e51","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.76.2:2379","kubernetes.io/config.hash":"19dcc362ef0990caebeed73c36545e51","kubernetes.io/config.seen":"2023-09-14T23:10:33.801525646Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"75d3d548281e02fd6383390a0e86f5df044efa5c5d00a8e6736524f3b53e876a","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/75d3d548281e02fd6383390a0e86f5df044efa5c5d00a8e6736524f3b53e876a/userdata","rootfs":"/var/lib/containers/storage/overlay/b8cec56e9ebe7d589282b9c416e8ff9dd0ee1735de27e0dbdf8db45f0fc0dc08/merged","created":"2023-09-14T23:10:34.450528796Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"61920a46","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.containe
r.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"61920a46\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"75d3d548281e02fd6383390a0e86f5df044efa5c5d00a8e6736524f3b53e876a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:10:34.342932748Z","io.kubernetes.cri-o.Image":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.28.1","io.kubernetes.cri-o.ImageRef":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-s
cheduler-pause-188837\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"0dd7249489e06a79323b7c83c9463f99\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-188837_0dd7249489e06a79323b7c83c9463f99/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b8cec56e9ebe7d589282b9c416e8ff9dd0ee1735de27e0dbdf8db45f0fc0dc08/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-188837_kube-system_0dd7249489e06a79323b7c83c9463f99_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/81f238f5cfc9e6da272d347d17be3ba3db4bd02285a31365712b48f6dc3d2bfa/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"81f238f5cfc9e6da272d347d17be3ba3db4bd02285a31365712b48f6dc3d2bfa","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-188837_kube-system_0dd7249489e06a79323b7c83c9463f99_0","io.kubernetes.cri-o.SeccompProfilePath"
:"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/0dd7249489e06a79323b7c83c9463f99/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/0dd7249489e06a79323b7c83c9463f99/containers/kube-scheduler/42f2c69d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-188837","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"0dd7249489e06a79323b7c83c9463f99","kubernetes.io/config.hash":"0dd7249489e06a79323b7c83c9463f99","kubernetes.io/config.seen":"2023-09-14T2
3:10:33.801534310Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a0c959683b247e8f18bad2254846f7f5800d6d75983dba1d009db272306fe74c","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/a0c959683b247e8f18bad2254846f7f5800d6d75983dba1d009db272306fe74c/userdata","rootfs":"/var/lib/containers/storage/overlay/5a0758e92f8398386486b4987e5a131680da821d4f8959a32ff098de12487e35/merged","created":"2023-09-14T23:10:34.470518676Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b7243b12","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b7243b12\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kuberne
tes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a0c959683b247e8f18bad2254846f7f5800d6d75983dba1d009db272306fe74c","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:10:34.370627075Z","io.kubernetes.cri-o.Image":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.28.1","io.kubernetes.cri-o.ImageRef":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-188837\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"560e1a7fbd0dd15eaf0a22aeb6173189\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-188837_560e1a7fbd0dd15eaf0a22aeb6173189/kube-controller-manager/0.log","i
o.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/5a0758e92f8398386486b4987e5a131680da821d4f8959a32ff098de12487e35/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-188837_kube-system_560e1a7fbd0dd15eaf0a22aeb6173189_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a0d11b5b50a0125aed66b7931b491a5b26575440230fe0ff66e008827cdf8996/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a0d11b5b50a0125aed66b7931b491a5b26575440230fe0ff66e008827cdf8996","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-188837_kube-system_560e1a7fbd0dd15eaf0a22aeb6173189_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":
true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/560e1a7fbd0dd15eaf0a22aeb6173189/containers/kube-controller-manager/ff149f05\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/560e1a7fbd0dd15eaf0a22aeb6173189/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs
\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-188837","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"560e1a7fbd0dd15eaf0a22aeb6173189","kubernetes.io/config.hash":"560e1a7fbd0dd15eaf0a22aeb6173189","kubernetes.io/config.seen":"2023-09-14T23:10:33.801533211Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a11cc7c0c39389608d37cf9b9d99c9e773dd54af5e8e354ba84b9d667da0e11b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/a11c
c7c0c39389608d37cf9b9d99c9e773dd54af5e8e354ba84b9d667da0e11b/userdata","rootfs":"/var/lib/containers/storage/overlay/e579ccb70c78334c31784b17607543010e536ecd07f0a7fdf0357a4bd33f7e28/merged","created":"2023-09-14T23:10:34.463545092Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"3673094b","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"3673094b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a11cc7c0c39389608d37cf9b9d99c9e773dd54af5e8e354ba84b9d667da0e11b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.C
reated":"2023-09-14T23:10:34.359410006Z","io.kubernetes.cri-o.Image":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri-o.ImageRef":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-188837\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"19dcc362ef0990caebeed73c36545e51\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-188837_19dcc362ef0990caebeed73c36545e51/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e579ccb70c78334c31784b17607543010e536ecd07f0a7fdf0357a4bd33f7e28/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-188837_kube-system_19dcc362ef0990caebeed73c36545e51_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-contain
ers/08ee086b60e39861e1ce0a94ebc01a8970091f0deec9eba68df06d4b4c8d1197/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"08ee086b60e39861e1ce0a94ebc01a8970091f0deec9eba68df06d4b4c8d1197","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-188837_kube-system_19dcc362ef0990caebeed73c36545e51_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/19dcc362ef0990caebeed73c36545e51/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/19dcc362ef0990caebeed73c36545e51/containers/etcd/190c0400\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\
"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-188837","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"19dcc362ef0990caebeed73c36545e51","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.76.2:2379","kubernetes.io/config.hash":"19dcc362ef0990caebeed73c36545e51","kubernetes.io/config.seen":"2023-09-14T23:10:33.801525646Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b148ea09494cac754d9aba5297c8b8139adb0036057ac049874cec2c08d50c9b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/b148ea09494cac754d9aba5297c8b8139adb0036057ac049874cec2c08d50c9b/userdata","rootfs":"/var/lib/containers/storage/overlay/5c86bcdb682e3aba66a2887a379fce94fb8f5009a413ddda449f8780e2df1a17/merged","created":"2023-09-14T23:11:11.6510
92931Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"9867e7ac","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9867e7ac\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b148ea09494cac754d9aba5297c8b8139adb0036057ac049874cec2c08d50c9b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:11:11.449186964Z","io.kubernetes.cri-o.Image":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230809-80a64d96","i
o.kubernetes.cri-o.ImageRef":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-rw9vg\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"fe2fe062-01ec-4c26-b6d1-c181f2d685ea\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-rw9vg_fe2fe062-01ec-4c26-b6d1-c181f2d685ea/kindnet-cni/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/5c86bcdb682e3aba66a2887a379fce94fb8f5009a413ddda449f8780e2df1a17/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-rw9vg_kube-system_fe2fe062-01ec-4c26-b6d1-c181f2d685ea_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4efca5701c8659f9d6d0ed03cc5a55bcf0de0b2a7eef3ffb2e26abcd585b7bcd/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4efca5701c8659f9d6d0ed03cc5a55bcf0d
e0b2a7eef3ffb2e26abcd585b7bcd","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-rw9vg_kube-system_fe2fe062-01ec-4c26-b6d1-c181f2d685ea_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/fe2fe062-01ec-4c26-b6d1-c181f2d685ea/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/fe2fe062-01ec-4c26-b6d1-c181f2d685ea/containers/kindnet-cni/2024cb12\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/et
c/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/fe2fe062-01ec-4c26-b6d1-c181f2d685ea/volumes/kubernetes.io~projected/kube-api-access-m5bj5\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-rw9vg","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"fe2fe062-01ec-4c26-b6d1-c181f2d685ea","kubernetes.io/config.seen":"2023-09-14T23:10:55.366916621Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bb70641b5371bd8bcb9d2648c449748c7bdc9a36befbcdba5dfb4add155e6e45","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/bb70641b5371bd8bcb9d2648c449748c7bdc9a36befbcdba5dfb4add155e6e45/userdata","rootfs":"/var/lib/containers/storage/overlay/f145c2a1a39383851b7173e92d1a0ae7c99102
b72466daa33bea293db88d8d83/merged","created":"2023-09-14T23:11:11.631061164Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"61920a46","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"61920a46\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"bb70641b5371bd8bcb9d2648c449748c7bdc9a36befbcdba5dfb4add155e6e45","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:11:11.492064322Z","io.kubernetes.cri-o.Image":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","io.kuber
netes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.28.1","io.kubernetes.cri-o.ImageRef":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-188837\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"0dd7249489e06a79323b7c83c9463f99\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-188837_0dd7249489e06a79323b7c83c9463f99/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f145c2a1a39383851b7173e92d1a0ae7c99102b72466daa33bea293db88d8d83/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-188837_kube-system_0dd7249489e06a79323b7c83c9463f99_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/81f238f5cfc9e6da272d347d17be3ba3db4bd02285a3136
5712b48f6dc3d2bfa/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"81f238f5cfc9e6da272d347d17be3ba3db4bd02285a31365712b48f6dc3d2bfa","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-188837_kube-system_0dd7249489e06a79323b7c83c9463f99_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/0dd7249489e06a79323b7c83c9463f99/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/0dd7249489e06a79323b7c83c9463f99/containers/kube-scheduler/2ece41b1\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.p
od.name":"kube-scheduler-pause-188837","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"0dd7249489e06a79323b7c83c9463f99","kubernetes.io/config.hash":"0dd7249489e06a79323b7c83c9463f99","kubernetes.io/config.seen":"2023-09-14T23:10:33.801534310Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c4d6d8178faf698da5ef35226961a972825968c13748ebe40df01345fd299a55","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/c4d6d8178faf698da5ef35226961a972825968c13748ebe40df01345fd299a55/userdata","rootfs":"/var/lib/containers/storage/overlay/50cc30cf6f991e4650ff1db34e61a3373d1122070c3ee9717a04ef7b56294690/merged","created":"2023-09-14T23:10:58.592604365Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"9867e7ac","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/terminatio
n-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9867e7ac\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"c4d6d8178faf698da5ef35226961a972825968c13748ebe40df01345fd299a55","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:10:58.539167702Z","io.kubernetes.cri-o.Image":"docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri-o.ImageRef":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-rw
9vg\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"fe2fe062-01ec-4c26-b6d1-c181f2d685ea\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-rw9vg_fe2fe062-01ec-4c26-b6d1-c181f2d685ea/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/50cc30cf6f991e4650ff1db34e61a3373d1122070c3ee9717a04ef7b56294690/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-rw9vg_kube-system_fe2fe062-01ec-4c26-b6d1-c181f2d685ea_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4efca5701c8659f9d6d0ed03cc5a55bcf0de0b2a7eef3ffb2e26abcd585b7bcd/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4efca5701c8659f9d6d0ed03cc5a55bcf0de0b2a7eef3ffb2e26abcd585b7bcd","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-rw9vg_kube-system_fe2fe062-01ec-4c26-b6d1-c181f2d685ea_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernete
s.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/fe2fe062-01ec-4c26-b6d1-c181f2d685ea/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/fe2fe062-01ec-4c26-b6d1-c181f2d685ea/containers/kindnet-cni/fbd9b8c7\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/fe2fe062-0
1ec-4c26-b6d1-c181f2d685ea/volumes/kubernetes.io~projected/kube-api-access-m5bj5\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-rw9vg","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"fe2fe062-01ec-4c26-b6d1-c181f2d685ea","kubernetes.io/config.seen":"2023-09-14T23:10:55.366916621Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d8085d7321e65ea2e472b73ea3e32fb771ef86f98dd31e666c61b721765ec678","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/d8085d7321e65ea2e472b73ea3e32fb771ef86f98dd31e666c61b721765ec678/userdata","rootfs":"/var/lib/containers/storage/overlay/64d13becf5862b04293b7a84c46254dda47408bfba2235c88c3663de009504cd/merged","created":"2023-09-14T23:11:11.640199218Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b7243b12","io.kubernetes.container.name":"kube-controller-manager","io.ku
bernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b7243b12\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d8085d7321e65ea2e472b73ea3e32fb771ef86f98dd31e666c61b721765ec678","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:11:11.454570083Z","io.kubernetes.cri-o.Image":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.28.1","io.kubernetes.cri-o.ImageRef":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.co
ntainer.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-188837\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"560e1a7fbd0dd15eaf0a22aeb6173189\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-188837_560e1a7fbd0dd15eaf0a22aeb6173189/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/64d13becf5862b04293b7a84c46254dda47408bfba2235c88c3663de009504cd/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-188837_kube-system_560e1a7fbd0dd15eaf0a22aeb6173189_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a0d11b5b50a0125aed66b7931b491a5b26575440230fe0ff66e008827cdf8996/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a0d11b5b50a0125aed66b7931b491a5b26575440230fe0ff66e008827cdf8996","io.kuber
netes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-188837_kube-system_560e1a7fbd0dd15eaf0a22aeb6173189_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/560e1a7fbd0dd15eaf0a22aeb6173189/containers/kube-controller-manager/80d214d3\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/560e1a7fbd0dd15eaf0a22aeb6173189/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubern
etes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-188837","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.ui
d":"560e1a7fbd0dd15eaf0a22aeb6173189","kubernetes.io/config.hash":"560e1a7fbd0dd15eaf0a22aeb6173189","kubernetes.io/config.seen":"2023-09-14T23:10:33.801533211Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f13a0e17ffd11a8efa7730301233ce818c79283eb992e004b5809a5e02460197","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/f13a0e17ffd11a8efa7730301233ce818c79283eb992e004b5809a5e02460197/userdata","rootfs":"/var/lib/containers/storage/overlay/08bc0324fcd6582549b7b04fe9e9074dc9a2b0729367aa2e7ca6f6ed40b39e95/merged","created":"2023-09-14T23:11:11.623486334Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b4bfd9d0","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b4bfd9d0\",\"
io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f13a0e17ffd11a8efa7730301233ce818c79283eb992e004b5809a5e02460197","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-14T23:11:11.383307018Z","io.kubernetes.cri-o.Image":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.1","io.kubernetes.cri-o.ImageRef":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-188837\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9daa06f1bce90ea27262295fdd763f52\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-syst
em_kube-apiserver-pause-188837_9daa06f1bce90ea27262295fdd763f52/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/08bc0324fcd6582549b7b04fe9e9074dc9a2b0729367aa2e7ca6f6ed40b39e95/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-188837_kube-system_9daa06f1bce90ea27262295fdd763f52_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a5fbc85f793393abbb3f5762b72f835014218bc8cb33ad2b3adf4eec5cee35fd/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a5fbc85f793393abbb3f5762b72f835014218bc8cb33ad2b3adf4eec5cee35fd","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-188837_kube-system_9daa06f1bce90ea27262295fdd763f52_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/term
ination-log\",\"host_path\":\"/var/lib/kubelet/pods/9daa06f1bce90ea27262295fdd763f52/containers/kube-apiserver/3df15ec9\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9daa06f1bce90ea27262295fdd763f52/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"hos
t_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-188837","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9daa06f1bce90ea27262295fdd763f52","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.76.2:8443","kubernetes.io/config.hash":"9daa06f1bce90ea27262295fdd763f52","kubernetes.io/config.seen":"2023-09-14T23:10:33.801531701Z","kubernetes.io/config.source":"file"},"owner":"root"}]
	I0914 23:11:23.297867 2974444 cri.go:126] list returned 14 containers
	I0914 23:11:23.297882 2974444 cri.go:129] container: {ID:1856cb02fae9437a99a0b362295ee3197a1ec49123b0d2c8de9cc17137e0233a Status:stopped}
	I0914 23:11:23.297898 2974444 cri.go:135] skipping {1856cb02fae9437a99a0b362295ee3197a1ec49123b0d2c8de9cc17137e0233a stopped}: state = "stopped", want "paused"
	I0914 23:11:23.297908 2974444 cri.go:129] container: {ID:1ee06b53307457c677083117c9f598f2e47155f0ef6cfb7f16d3da49bc0c7336 Status:stopped}
	I0914 23:11:23.297921 2974444 cri.go:135] skipping {1ee06b53307457c677083117c9f598f2e47155f0ef6cfb7f16d3da49bc0c7336 stopped}: state = "stopped", want "paused"
	I0914 23:11:23.297930 2974444 cri.go:129] container: {ID:2871ed46af76b9eb0ff9f55c298e927ed32067ed58285b1f45f29c44ffda1868 Status:stopped}
	I0914 23:11:23.297939 2974444 cri.go:135] skipping {2871ed46af76b9eb0ff9f55c298e927ed32067ed58285b1f45f29c44ffda1868 stopped}: state = "stopped", want "paused"
	I0914 23:11:23.297948 2974444 cri.go:129] container: {ID:3096294cf9ef2836640c6a63f9f3dfa4e709195bb30bc3eeb3edb0e785df0352 Status:stopped}
	I0914 23:11:23.297955 2974444 cri.go:135] skipping {3096294cf9ef2836640c6a63f9f3dfa4e709195bb30bc3eeb3edb0e785df0352 stopped}: state = "stopped", want "paused"
	I0914 23:11:23.297961 2974444 cri.go:129] container: {ID:3b993f4a4efd1b16d3a6e63807f6858e16f35f158bca8d4683eee16ab99e85fb Status:stopped}
	I0914 23:11:23.297971 2974444 cri.go:135] skipping {3b993f4a4efd1b16d3a6e63807f6858e16f35f158bca8d4683eee16ab99e85fb stopped}: state = "stopped", want "paused"
	I0914 23:11:23.297977 2974444 cri.go:129] container: {ID:3ddca4ba5663e330ca359974f99a382136b3178138059f123f035b5ef453142b Status:stopped}
	I0914 23:11:23.297986 2974444 cri.go:135] skipping {3ddca4ba5663e330ca359974f99a382136b3178138059f123f035b5ef453142b stopped}: state = "stopped", want "paused"
	I0914 23:11:23.297993 2974444 cri.go:129] container: {ID:75d3d548281e02fd6383390a0e86f5df044efa5c5d00a8e6736524f3b53e876a Status:stopped}
	I0914 23:11:23.297999 2974444 cri.go:135] skipping {75d3d548281e02fd6383390a0e86f5df044efa5c5d00a8e6736524f3b53e876a stopped}: state = "stopped", want "paused"
	I0914 23:11:23.298008 2974444 cri.go:129] container: {ID:a0c959683b247e8f18bad2254846f7f5800d6d75983dba1d009db272306fe74c Status:stopped}
	I0914 23:11:23.298018 2974444 cri.go:135] skipping {a0c959683b247e8f18bad2254846f7f5800d6d75983dba1d009db272306fe74c stopped}: state = "stopped", want "paused"
	I0914 23:11:23.298027 2974444 cri.go:129] container: {ID:a11cc7c0c39389608d37cf9b9d99c9e773dd54af5e8e354ba84b9d667da0e11b Status:stopped}
	I0914 23:11:23.298036 2974444 cri.go:135] skipping {a11cc7c0c39389608d37cf9b9d99c9e773dd54af5e8e354ba84b9d667da0e11b stopped}: state = "stopped", want "paused"
	I0914 23:11:23.298042 2974444 cri.go:129] container: {ID:b148ea09494cac754d9aba5297c8b8139adb0036057ac049874cec2c08d50c9b Status:stopped}
	I0914 23:11:23.298049 2974444 cri.go:135] skipping {b148ea09494cac754d9aba5297c8b8139adb0036057ac049874cec2c08d50c9b stopped}: state = "stopped", want "paused"
	I0914 23:11:23.298058 2974444 cri.go:129] container: {ID:bb70641b5371bd8bcb9d2648c449748c7bdc9a36befbcdba5dfb4add155e6e45 Status:stopped}
	I0914 23:11:23.298064 2974444 cri.go:135] skipping {bb70641b5371bd8bcb9d2648c449748c7bdc9a36befbcdba5dfb4add155e6e45 stopped}: state = "stopped", want "paused"
	I0914 23:11:23.298072 2974444 cri.go:129] container: {ID:c4d6d8178faf698da5ef35226961a972825968c13748ebe40df01345fd299a55 Status:stopped}
	I0914 23:11:23.298079 2974444 cri.go:135] skipping {c4d6d8178faf698da5ef35226961a972825968c13748ebe40df01345fd299a55 stopped}: state = "stopped", want "paused"
	I0914 23:11:23.298084 2974444 cri.go:129] container: {ID:d8085d7321e65ea2e472b73ea3e32fb771ef86f98dd31e666c61b721765ec678 Status:stopped}
	I0914 23:11:23.298091 2974444 cri.go:135] skipping {d8085d7321e65ea2e472b73ea3e32fb771ef86f98dd31e666c61b721765ec678 stopped}: state = "stopped", want "paused"
	I0914 23:11:23.298100 2974444 cri.go:129] container: {ID:f13a0e17ffd11a8efa7730301233ce818c79283eb992e004b5809a5e02460197 Status:stopped}
	I0914 23:11:23.298107 2974444 cri.go:135] skipping {f13a0e17ffd11a8efa7730301233ce818c79283eb992e004b5809a5e02460197 stopped}: state = "stopped", want "paused"
	I0914 23:11:23.298166 2974444 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 23:11:23.308773 2974444 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 23:11:23.308831 2974444 kubeadm.go:636] restartCluster start
	I0914 23:11:23.308913 2974444 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 23:11:23.319081 2974444 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:23.319763 2974444 kubeconfig.go:92] found "pause-188837" server: "https://192.168.76.2:8443"
	I0914 23:11:23.320823 2974444 kapi.go:59] client config for pause-188837: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837/client.key", CAFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 23:11:23.322554 2974444 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 23:11:23.334356 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:23.334473 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:23.346384 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:23.346441 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:23.346495 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:23.358010 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:23.858758 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:23.858854 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:23.872122 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:24.358648 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:24.358736 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:24.370929 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:20.272781 2959146 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0914 23:11:20.273137 2959146 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0914 23:11:20.273175 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 23:11:20.273252 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 23:11:20.316252 2959146 cri.go:89] found id: "2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:20.316276 2959146 cri.go:89] found id: ""
	I0914 23:11:20.316297 2959146 logs.go:284] 1 containers: [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51]
	I0914 23:11:20.316415 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:20.320975 2959146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 23:11:20.321066 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 23:11:20.360078 2959146 cri.go:89] found id: ""
	I0914 23:11:20.360099 2959146 logs.go:284] 0 containers: []
	W0914 23:11:20.360108 2959146 logs.go:286] No container was found matching "etcd"
	I0914 23:11:20.360114 2959146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 23:11:20.360203 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 23:11:20.399736 2959146 cri.go:89] found id: ""
	I0914 23:11:20.399760 2959146 logs.go:284] 0 containers: []
	W0914 23:11:20.399769 2959146 logs.go:286] No container was found matching "coredns"
	I0914 23:11:20.399776 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 23:11:20.399858 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 23:11:20.441029 2959146 cri.go:89] found id: "b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:20.441056 2959146 cri.go:89] found id: ""
	I0914 23:11:20.441065 2959146 logs.go:284] 1 containers: [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0]
	I0914 23:11:20.441165 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:20.445508 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 23:11:20.445576 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 23:11:20.486500 2959146 cri.go:89] found id: ""
	I0914 23:11:20.486571 2959146 logs.go:284] 0 containers: []
	W0914 23:11:20.486593 2959146 logs.go:286] No container was found matching "kube-proxy"
	I0914 23:11:20.486608 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 23:11:20.486680 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 23:11:20.535538 2959146 cri.go:89] found id: "44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387"
	I0914 23:11:20.535558 2959146 cri.go:89] found id: "7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6"
	I0914 23:11:20.535567 2959146 cri.go:89] found id: ""
	I0914 23:11:20.535574 2959146 logs.go:284] 2 containers: [44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387 7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6]
	I0914 23:11:20.535681 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:20.539962 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:20.543851 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 23:11:20.543912 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 23:11:20.583371 2959146 cri.go:89] found id: ""
	I0914 23:11:20.583395 2959146 logs.go:284] 0 containers: []
	W0914 23:11:20.583411 2959146 logs.go:286] No container was found matching "kindnet"
	I0914 23:11:20.583418 2959146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 23:11:20.583476 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 23:11:20.625604 2959146 cri.go:89] found id: ""
	I0914 23:11:20.625674 2959146 logs.go:284] 0 containers: []
	W0914 23:11:20.625689 2959146 logs.go:286] No container was found matching "storage-provisioner"
	I0914 23:11:20.625704 2959146 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:11:20.625719 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 23:11:20.703074 2959146 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 23:11:20.703094 2959146 logs.go:123] Gathering logs for kube-controller-manager [44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387] ...
	I0914 23:11:20.703106 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387"
	I0914 23:11:20.745987 2959146 logs.go:123] Gathering logs for kube-controller-manager [7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6] ...
	I0914 23:11:20.746062 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6"
	I0914 23:11:20.785053 2959146 logs.go:123] Gathering logs for kube-scheduler [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0] ...
	I0914 23:11:20.785082 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:20.882506 2959146 logs.go:123] Gathering logs for CRI-O ...
	I0914 23:11:20.882541 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 23:11:20.934366 2959146 logs.go:123] Gathering logs for container status ...
	I0914 23:11:20.934398 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:11:20.977464 2959146 logs.go:123] Gathering logs for kubelet ...
	I0914 23:11:20.977490 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:11:21.098451 2959146 logs.go:123] Gathering logs for dmesg ...
	I0914 23:11:21.098485 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:11:21.122410 2959146 logs.go:123] Gathering logs for kube-apiserver [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51] ...
	I0914 23:11:21.122441 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:23.670965 2959146 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0914 23:11:23.671391 2959146 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0914 23:11:23.671438 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 23:11:23.671496 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 23:11:23.714987 2959146 cri.go:89] found id: "2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:23.715011 2959146 cri.go:89] found id: ""
	I0914 23:11:23.715020 2959146 logs.go:284] 1 containers: [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51]
	I0914 23:11:23.715076 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:23.719665 2959146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 23:11:23.719741 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 23:11:23.762592 2959146 cri.go:89] found id: ""
	I0914 23:11:23.762614 2959146 logs.go:284] 0 containers: []
	W0914 23:11:23.762623 2959146 logs.go:286] No container was found matching "etcd"
	I0914 23:11:23.762630 2959146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 23:11:23.762695 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 23:11:23.802106 2959146 cri.go:89] found id: ""
	I0914 23:11:23.802130 2959146 logs.go:284] 0 containers: []
	W0914 23:11:23.802140 2959146 logs.go:286] No container was found matching "coredns"
	I0914 23:11:23.802147 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 23:11:23.802202 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 23:11:23.841390 2959146 cri.go:89] found id: "b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:23.841414 2959146 cri.go:89] found id: ""
	I0914 23:11:23.841422 2959146 logs.go:284] 1 containers: [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0]
	I0914 23:11:23.841476 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:23.845944 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 23:11:23.846058 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 23:11:23.894252 2959146 cri.go:89] found id: ""
	I0914 23:11:23.894274 2959146 logs.go:284] 0 containers: []
	W0914 23:11:23.894283 2959146 logs.go:286] No container was found matching "kube-proxy"
	I0914 23:11:23.894289 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 23:11:23.894348 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 23:11:23.943884 2959146 cri.go:89] found id: "44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387"
	I0914 23:11:23.943904 2959146 cri.go:89] found id: "7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6"
	I0914 23:11:23.943909 2959146 cri.go:89] found id: ""
	I0914 23:11:23.943917 2959146 logs.go:284] 2 containers: [44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387 7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6]
	I0914 23:11:23.944013 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:23.948332 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:23.952667 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 23:11:23.952740 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 23:11:23.993045 2959146 cri.go:89] found id: ""
	I0914 23:11:23.993109 2959146 logs.go:284] 0 containers: []
	W0914 23:11:23.993131 2959146 logs.go:286] No container was found matching "kindnet"
	I0914 23:11:23.993153 2959146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 23:11:23.993222 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 23:11:24.034048 2959146 cri.go:89] found id: ""
	I0914 23:11:24.034070 2959146 logs.go:284] 0 containers: []
	W0914 23:11:24.034079 2959146 logs.go:286] No container was found matching "storage-provisioner"
	I0914 23:11:24.034093 2959146 logs.go:123] Gathering logs for kubelet ...
	I0914 23:11:24.034104 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:11:24.156368 2959146 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:11:24.156404 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 23:11:24.227547 2959146 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 23:11:24.227608 2959146 logs.go:123] Gathering logs for kube-apiserver [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51] ...
	I0914 23:11:24.227634 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:24.273849 2959146 logs.go:123] Gathering logs for kube-scheduler [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0] ...
	I0914 23:11:24.273878 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:24.388053 2959146 logs.go:123] Gathering logs for CRI-O ...
	I0914 23:11:24.388088 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 23:11:24.439339 2959146 logs.go:123] Gathering logs for container status ...
	I0914 23:11:24.439373 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:11:24.499386 2959146 logs.go:123] Gathering logs for dmesg ...
	I0914 23:11:24.499418 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:11:24.523511 2959146 logs.go:123] Gathering logs for kube-controller-manager [44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387] ...
	I0914 23:11:24.523542 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387"
	I0914 23:11:24.567677 2959146 logs.go:123] Gathering logs for kube-controller-manager [7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6] ...
	I0914 23:11:24.567704 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6"
	I0914 23:11:24.858755 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:24.858851 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:24.871166 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:25.358845 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:25.358951 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:25.370830 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:25.858193 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:25.858276 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:25.870460 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:26.359101 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:26.359183 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:26.371106 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:26.858172 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:26.858258 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:26.872633 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:27.358177 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:27.358256 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:27.381928 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:27.858170 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:27.858251 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:27.876932 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:28.358332 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:28.358412 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:28.370781 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:28.858203 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:28.858293 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:28.870063 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:29.358212 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:29.358299 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:29.370269 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:27.109960 2959146 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0914 23:11:27.110454 2959146 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0914 23:11:27.110530 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 23:11:27.110610 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 23:11:27.177483 2959146 cri.go:89] found id: "2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:27.177502 2959146 cri.go:89] found id: ""
	I0914 23:11:27.177510 2959146 logs.go:284] 1 containers: [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51]
	I0914 23:11:27.177570 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:27.182250 2959146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 23:11:27.182305 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 23:11:27.229719 2959146 cri.go:89] found id: ""
	I0914 23:11:27.229739 2959146 logs.go:284] 0 containers: []
	W0914 23:11:27.229748 2959146 logs.go:286] No container was found matching "etcd"
	I0914 23:11:27.229754 2959146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 23:11:27.229808 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 23:11:27.283543 2959146 cri.go:89] found id: ""
	I0914 23:11:27.283564 2959146 logs.go:284] 0 containers: []
	W0914 23:11:27.283572 2959146 logs.go:286] No container was found matching "coredns"
	I0914 23:11:27.283587 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 23:11:27.283642 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 23:11:27.333900 2959146 cri.go:89] found id: "b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:27.333919 2959146 cri.go:89] found id: ""
	I0914 23:11:27.333927 2959146 logs.go:284] 1 containers: [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0]
	I0914 23:11:27.333981 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:27.339226 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 23:11:27.339291 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 23:11:27.399021 2959146 cri.go:89] found id: ""
	I0914 23:11:27.399041 2959146 logs.go:284] 0 containers: []
	W0914 23:11:27.399050 2959146 logs.go:286] No container was found matching "kube-proxy"
	I0914 23:11:27.399057 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 23:11:27.399110 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 23:11:27.453323 2959146 cri.go:89] found id: "44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387"
	I0914 23:11:27.453411 2959146 cri.go:89] found id: "7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6"
	I0914 23:11:27.453431 2959146 cri.go:89] found id: ""
	I0914 23:11:27.453469 2959146 logs.go:284] 2 containers: [44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387 7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6]
	I0914 23:11:27.453561 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:27.459328 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:27.464150 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 23:11:27.464215 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 23:11:27.547564 2959146 cri.go:89] found id: ""
	I0914 23:11:27.547584 2959146 logs.go:284] 0 containers: []
	W0914 23:11:27.547592 2959146 logs.go:286] No container was found matching "kindnet"
	I0914 23:11:27.547598 2959146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 23:11:27.547660 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 23:11:27.616180 2959146 cri.go:89] found id: ""
	I0914 23:11:27.616201 2959146 logs.go:284] 0 containers: []
	W0914 23:11:27.616208 2959146 logs.go:286] No container was found matching "storage-provisioner"
	I0914 23:11:27.616221 2959146 logs.go:123] Gathering logs for kubelet ...
	I0914 23:11:27.616237 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:11:27.752741 2959146 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:11:27.752819 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 23:11:27.850680 2959146 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 23:11:27.850705 2959146 logs.go:123] Gathering logs for kube-apiserver [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51] ...
	I0914 23:11:27.850725 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:27.943724 2959146 logs.go:123] Gathering logs for kube-scheduler [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0] ...
	I0914 23:11:27.943872 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:28.074466 2959146 logs.go:123] Gathering logs for kube-controller-manager [44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387] ...
	I0914 23:11:28.074506 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387"
	I0914 23:11:28.117878 2959146 logs.go:123] Gathering logs for container status ...
	I0914 23:11:28.117947 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:11:28.164470 2959146 logs.go:123] Gathering logs for dmesg ...
	I0914 23:11:28.164525 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:11:28.188368 2959146 logs.go:123] Gathering logs for kube-controller-manager [7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6] ...
	I0914 23:11:28.188401 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6"
	W0914 23:11:28.234517 2959146 logs.go:130] failed kube-controller-manager [7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6": Process exited with status 1
	stdout:
	
	stderr:
	E0914 23:11:28.231298    4782 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6\": container with ID starting with 7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6 not found: ID does not exist" containerID="7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6"
	time="2023-09-14T23:11:28Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6\": container with ID starting with 7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6 not found: ID does not exist"
	 output: 
	** stderr ** 
	E0914 23:11:28.231298    4782 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6\": container with ID starting with 7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6 not found: ID does not exist" containerID="7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6"
	time="2023-09-14T23:11:28Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6\": container with ID starting with 7e2c9020b0039b5fcc6ef907ab354f58a678d0ace8c0480ce8b1a2072ccfa3c6 not found: ID does not exist"
	
	** /stderr **
	I0914 23:11:28.234542 2959146 logs.go:123] Gathering logs for CRI-O ...
	I0914 23:11:28.234554 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 23:11:29.859002 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:29.859084 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:29.871448 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:30.359101 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:30.359189 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:30.371382 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:30.858572 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:30.858638 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:30.893313 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:31.358632 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:31.358722 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:31.391068 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:31.858658 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:31.858740 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:31.871191 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:32.358835 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:32.358922 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:32.371681 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:32.858198 2974444 api_server.go:166] Checking apiserver status ...
	I0914 23:11:32.858306 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 23:11:32.870241 2974444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:33.334719 2974444 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 23:11:33.334749 2974444 kubeadm.go:1128] stopping kube-system containers ...
	I0914 23:11:33.334762 2974444 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 23:11:33.334831 2974444 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 23:11:33.378497 2974444 cri.go:89] found id: "af88552a2fe0ec2def6d5fcbc7a8ed3820b2edab71922c453ed4b90c0742a4bd"
	I0914 23:11:33.378516 2974444 cri.go:89] found id: "cff2edb1f640fe1f42767a20c1ea692f296328f86b24187ba5993d5026d95092"
	I0914 23:11:33.378522 2974444 cri.go:89] found id: "7817de3dddb7f70040619830ee918074c182e90ca7e8c414285395b02003d648"
	I0914 23:11:33.378526 2974444 cri.go:89] found id: "3096294cf9ef2836640c6a63f9f3dfa4e709195bb30bc3eeb3edb0e785df0352"
	I0914 23:11:33.378530 2974444 cri.go:89] found id: "3ddca4ba5663e330ca359974f99a382136b3178138059f123f035b5ef453142b"
	I0914 23:11:33.378536 2974444 cri.go:89] found id: "bb70641b5371bd8bcb9d2648c449748c7bdc9a36befbcdba5dfb4add155e6e45"
	I0914 23:11:33.378540 2974444 cri.go:89] found id: "d8085d7321e65ea2e472b73ea3e32fb771ef86f98dd31e666c61b721765ec678"
	I0914 23:11:33.378544 2974444 cri.go:89] found id: "b148ea09494cac754d9aba5297c8b8139adb0036057ac049874cec2c08d50c9b"
	I0914 23:11:33.378548 2974444 cri.go:89] found id: "1ee06b53307457c677083117c9f598f2e47155f0ef6cfb7f16d3da49bc0c7336"
	I0914 23:11:33.378559 2974444 cri.go:89] found id: "f13a0e17ffd11a8efa7730301233ce818c79283eb992e004b5809a5e02460197"
	I0914 23:11:33.378563 2974444 cri.go:89] found id: ""
	I0914 23:11:33.378568 2974444 cri.go:234] Stopping containers: [af88552a2fe0ec2def6d5fcbc7a8ed3820b2edab71922c453ed4b90c0742a4bd cff2edb1f640fe1f42767a20c1ea692f296328f86b24187ba5993d5026d95092 7817de3dddb7f70040619830ee918074c182e90ca7e8c414285395b02003d648 3096294cf9ef2836640c6a63f9f3dfa4e709195bb30bc3eeb3edb0e785df0352 3ddca4ba5663e330ca359974f99a382136b3178138059f123f035b5ef453142b bb70641b5371bd8bcb9d2648c449748c7bdc9a36befbcdba5dfb4add155e6e45 d8085d7321e65ea2e472b73ea3e32fb771ef86f98dd31e666c61b721765ec678 b148ea09494cac754d9aba5297c8b8139adb0036057ac049874cec2c08d50c9b 1ee06b53307457c677083117c9f598f2e47155f0ef6cfb7f16d3da49bc0c7336 f13a0e17ffd11a8efa7730301233ce818c79283eb992e004b5809a5e02460197]
	I0914 23:11:33.378621 2974444 ssh_runner.go:195] Run: which crictl
	I0914 23:11:33.383494 2974444 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 af88552a2fe0ec2def6d5fcbc7a8ed3820b2edab71922c453ed4b90c0742a4bd cff2edb1f640fe1f42767a20c1ea692f296328f86b24187ba5993d5026d95092 7817de3dddb7f70040619830ee918074c182e90ca7e8c414285395b02003d648 3096294cf9ef2836640c6a63f9f3dfa4e709195bb30bc3eeb3edb0e785df0352 3ddca4ba5663e330ca359974f99a382136b3178138059f123f035b5ef453142b bb70641b5371bd8bcb9d2648c449748c7bdc9a36befbcdba5dfb4add155e6e45 d8085d7321e65ea2e472b73ea3e32fb771ef86f98dd31e666c61b721765ec678 b148ea09494cac754d9aba5297c8b8139adb0036057ac049874cec2c08d50c9b 1ee06b53307457c677083117c9f598f2e47155f0ef6cfb7f16d3da49bc0c7336 f13a0e17ffd11a8efa7730301233ce818c79283eb992e004b5809a5e02460197
	I0914 23:11:33.958673 2974444 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 23:11:34.063570 2974444 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 23:11:34.079282 2974444 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Sep 14 23:10 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Sep 14 23:10 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Sep 14 23:10 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep 14 23:10 /etc/kubernetes/scheduler.conf
	
	I0914 23:11:34.079355 2974444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 23:11:34.091262 2974444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 23:11:34.103194 2974444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 23:11:34.113645 2974444 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:34.113708 2974444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 23:11:34.123532 2974444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 23:11:34.133542 2974444 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0914 23:11:34.133609 2974444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 23:11:34.143725 2974444 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 23:11:34.154448 2974444 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 23:11:34.154473 2974444 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:11:34.230124 2974444 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:11:30.784975 2959146 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0914 23:11:30.785334 2959146 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0914 23:11:30.785371 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 23:11:30.785423 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 23:11:30.856331 2959146 cri.go:89] found id: "2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:30.856404 2959146 cri.go:89] found id: ""
	I0914 23:11:30.856427 2959146 logs.go:284] 1 containers: [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51]
	I0914 23:11:30.856521 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:30.867280 2959146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 23:11:30.867355 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 23:11:30.918225 2959146 cri.go:89] found id: ""
	I0914 23:11:30.918249 2959146 logs.go:284] 0 containers: []
	W0914 23:11:30.918258 2959146 logs.go:286] No container was found matching "etcd"
	I0914 23:11:30.918264 2959146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 23:11:30.918332 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 23:11:30.981093 2959146 cri.go:89] found id: ""
	I0914 23:11:30.981119 2959146 logs.go:284] 0 containers: []
	W0914 23:11:30.981128 2959146 logs.go:286] No container was found matching "coredns"
	I0914 23:11:30.981135 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 23:11:30.981193 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 23:11:31.032729 2959146 cri.go:89] found id: "b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:31.032754 2959146 cri.go:89] found id: ""
	I0914 23:11:31.032763 2959146 logs.go:284] 1 containers: [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0]
	I0914 23:11:31.032817 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:31.038344 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 23:11:31.038416 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 23:11:31.099697 2959146 cri.go:89] found id: ""
	I0914 23:11:31.099723 2959146 logs.go:284] 0 containers: []
	W0914 23:11:31.099732 2959146 logs.go:286] No container was found matching "kube-proxy"
	I0914 23:11:31.099739 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 23:11:31.099797 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 23:11:31.154597 2959146 cri.go:89] found id: "44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387"
	I0914 23:11:31.154621 2959146 cri.go:89] found id: ""
	I0914 23:11:31.154628 2959146 logs.go:284] 1 containers: [44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387]
	I0914 23:11:31.154689 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:31.159728 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 23:11:31.159797 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 23:11:31.214626 2959146 cri.go:89] found id: ""
	I0914 23:11:31.214653 2959146 logs.go:284] 0 containers: []
	W0914 23:11:31.214663 2959146 logs.go:286] No container was found matching "kindnet"
	I0914 23:11:31.214669 2959146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 23:11:31.214734 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 23:11:31.263224 2959146 cri.go:89] found id: ""
	I0914 23:11:31.263251 2959146 logs.go:284] 0 containers: []
	W0914 23:11:31.263260 2959146 logs.go:286] No container was found matching "storage-provisioner"
	I0914 23:11:31.263269 2959146 logs.go:123] Gathering logs for container status ...
	I0914 23:11:31.263282 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:11:31.320322 2959146 logs.go:123] Gathering logs for kubelet ...
	I0914 23:11:31.320349 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:11:31.470016 2959146 logs.go:123] Gathering logs for dmesg ...
	I0914 23:11:31.470053 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:11:31.502615 2959146 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:11:31.502703 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 23:11:31.594226 2959146 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 23:11:31.594285 2959146 logs.go:123] Gathering logs for kube-apiserver [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51] ...
	I0914 23:11:31.594312 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:31.670148 2959146 logs.go:123] Gathering logs for kube-scheduler [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0] ...
	I0914 23:11:31.670216 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:31.811014 2959146 logs.go:123] Gathering logs for kube-controller-manager [44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387] ...
	I0914 23:11:31.811095 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387"
	I0914 23:11:31.854864 2959146 logs.go:123] Gathering logs for CRI-O ...
	I0914 23:11:31.854890 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 23:11:34.407787 2959146 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0914 23:11:34.408198 2959146 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0914 23:11:34.408235 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 23:11:34.408297 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 23:11:34.483833 2959146 cri.go:89] found id: "2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:34.483858 2959146 cri.go:89] found id: ""
	I0914 23:11:34.483879 2959146 logs.go:284] 1 containers: [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51]
	I0914 23:11:34.483936 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:34.488562 2959146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 23:11:34.488647 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 23:11:34.544751 2959146 cri.go:89] found id: ""
	I0914 23:11:34.544772 2959146 logs.go:284] 0 containers: []
	W0914 23:11:34.544780 2959146 logs.go:286] No container was found matching "etcd"
	I0914 23:11:34.544786 2959146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 23:11:34.544844 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 23:11:34.595719 2959146 cri.go:89] found id: ""
	I0914 23:11:34.595738 2959146 logs.go:284] 0 containers: []
	W0914 23:11:34.595747 2959146 logs.go:286] No container was found matching "coredns"
	I0914 23:11:34.595753 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 23:11:34.595807 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 23:11:34.651014 2959146 cri.go:89] found id: "b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:34.651033 2959146 cri.go:89] found id: ""
	I0914 23:11:34.651041 2959146 logs.go:284] 1 containers: [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0]
	I0914 23:11:34.651096 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:34.656188 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 23:11:34.656253 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 23:11:34.719186 2959146 cri.go:89] found id: ""
	I0914 23:11:34.719206 2959146 logs.go:284] 0 containers: []
	W0914 23:11:34.719214 2959146 logs.go:286] No container was found matching "kube-proxy"
	I0914 23:11:34.719221 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 23:11:34.719278 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 23:11:34.776413 2959146 cri.go:89] found id: "44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387"
	I0914 23:11:34.776431 2959146 cri.go:89] found id: ""
	I0914 23:11:34.776440 2959146 logs.go:284] 1 containers: [44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387]
	I0914 23:11:34.776502 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:34.781564 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 23:11:34.781628 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 23:11:37.564733 2974444 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.334569585s)
	I0914 23:11:37.564777 2974444 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:11:37.762970 2974444 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:11:37.845599 2974444 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:11:37.932078 2974444 api_server.go:52] waiting for apiserver process to appear ...
	I0914 23:11:37.932150 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:11:37.972607 2974444 api_server.go:72] duration metric: took 40.528346ms to wait for apiserver process to appear ...
	I0914 23:11:37.972634 2974444 api_server.go:88] waiting for apiserver healthz status ...
	I0914 23:11:37.972651 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:11:34.829280 2959146 cri.go:89] found id: ""
	I0914 23:11:34.829352 2959146 logs.go:284] 0 containers: []
	W0914 23:11:34.829374 2959146 logs.go:286] No container was found matching "kindnet"
	I0914 23:11:34.829395 2959146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 23:11:34.829477 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 23:11:34.881472 2959146 cri.go:89] found id: ""
	I0914 23:11:34.881533 2959146 logs.go:284] 0 containers: []
	W0914 23:11:34.881557 2959146 logs.go:286] No container was found matching "storage-provisioner"
	I0914 23:11:34.881583 2959146 logs.go:123] Gathering logs for kube-scheduler [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0] ...
	I0914 23:11:34.881618 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:35.013196 2959146 logs.go:123] Gathering logs for kube-controller-manager [44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387] ...
	I0914 23:11:35.013271 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387"
	I0914 23:11:35.079839 2959146 logs.go:123] Gathering logs for CRI-O ...
	I0914 23:11:35.079863 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 23:11:35.138390 2959146 logs.go:123] Gathering logs for container status ...
	I0914 23:11:35.138464 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:11:35.213662 2959146 logs.go:123] Gathering logs for kubelet ...
	I0914 23:11:35.213689 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:11:35.356033 2959146 logs.go:123] Gathering logs for dmesg ...
	I0914 23:11:35.356071 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:11:35.378856 2959146 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:11:35.378888 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 23:11:35.504253 2959146 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 23:11:35.504271 2959146 logs.go:123] Gathering logs for kube-apiserver [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51] ...
	I0914 23:11:35.504283 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
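The 2959146 cycle above repeats while its apiserver (192.168.67.2) stays unreachable: for each control-plane component, minikube lists candidate containers with "crictl ps -a --quiet --name=<component>" and tails the logs of any it finds with "crictl logs --tail 400 <id>". A standalone sketch of that discovery-and-tail pattern, run directly on the node, might look like the Go below (the helper names are illustrative, not minikube's cri.go/logs.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the logged "crictl ps -a --quiet --name=<name>" call:
// it returns the IDs of all containers (running or exited) whose name matches.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs mirrors the logged "crictl logs --tail 400 <id>" call.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainers(name)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			logs, _ := tailLogs(id)
			fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
		}
	}
}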
	I0914 23:11:38.071557 2959146 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0914 23:11:38.071927 2959146 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0914 23:11:38.071975 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 23:11:38.072034 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 23:11:38.125878 2959146 cri.go:89] found id: "2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:38.125901 2959146 cri.go:89] found id: ""
	I0914 23:11:38.125909 2959146 logs.go:284] 1 containers: [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51]
	I0914 23:11:38.125963 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:38.130374 2959146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 23:11:38.130439 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 23:11:38.180715 2959146 cri.go:89] found id: ""
	I0914 23:11:38.180737 2959146 logs.go:284] 0 containers: []
	W0914 23:11:38.180745 2959146 logs.go:286] No container was found matching "etcd"
	I0914 23:11:38.180751 2959146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 23:11:38.180810 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 23:11:38.224339 2959146 cri.go:89] found id: ""
	I0914 23:11:38.224359 2959146 logs.go:284] 0 containers: []
	W0914 23:11:38.224367 2959146 logs.go:286] No container was found matching "coredns"
	I0914 23:11:38.224374 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 23:11:38.224429 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 23:11:38.264659 2959146 cri.go:89] found id: "b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:38.264678 2959146 cri.go:89] found id: ""
	I0914 23:11:38.264686 2959146 logs.go:284] 1 containers: [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0]
	I0914 23:11:38.264740 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:38.270185 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 23:11:38.270252 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 23:11:38.314484 2959146 cri.go:89] found id: ""
	I0914 23:11:38.314505 2959146 logs.go:284] 0 containers: []
	W0914 23:11:38.314514 2959146 logs.go:286] No container was found matching "kube-proxy"
	I0914 23:11:38.314520 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 23:11:38.314583 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 23:11:38.354953 2959146 cri.go:89] found id: "44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387"
	I0914 23:11:38.354973 2959146 cri.go:89] found id: ""
	I0914 23:11:38.354981 2959146 logs.go:284] 1 containers: [44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387]
	I0914 23:11:38.355037 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:38.359420 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 23:11:38.359488 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 23:11:38.398456 2959146 cri.go:89] found id: ""
	I0914 23:11:38.398477 2959146 logs.go:284] 0 containers: []
	W0914 23:11:38.398485 2959146 logs.go:286] No container was found matching "kindnet"
	I0914 23:11:38.398491 2959146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 23:11:38.398557 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 23:11:38.437951 2959146 cri.go:89] found id: ""
	I0914 23:11:38.437972 2959146 logs.go:284] 0 containers: []
	W0914 23:11:38.437980 2959146 logs.go:286] No container was found matching "storage-provisioner"
	I0914 23:11:38.437990 2959146 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:11:38.438005 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 23:11:38.518028 2959146 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 23:11:38.518050 2959146 logs.go:123] Gathering logs for kube-apiserver [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51] ...
	I0914 23:11:38.518062 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:38.563178 2959146 logs.go:123] Gathering logs for kube-scheduler [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0] ...
	I0914 23:11:38.563206 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:38.676596 2959146 logs.go:123] Gathering logs for kube-controller-manager [44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387] ...
	I0914 23:11:38.676634 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387"
	I0914 23:11:38.723333 2959146 logs.go:123] Gathering logs for CRI-O ...
	I0914 23:11:38.723365 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 23:11:38.773079 2959146 logs.go:123] Gathering logs for container status ...
	I0914 23:11:38.773113 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:11:38.832096 2959146 logs.go:123] Gathering logs for kubelet ...
	I0914 23:11:38.832125 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:11:38.959377 2959146 logs.go:123] Gathering logs for dmesg ...
	I0914 23:11:38.959413 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:11:42.973660 2974444 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:11:42.973698 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:11:41.484026 2959146 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0914 23:11:41.484460 2959146 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0914 23:11:41.484529 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 23:11:41.484592 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 23:11:41.524797 2959146 cri.go:89] found id: "2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:41.524820 2959146 cri.go:89] found id: ""
	I0914 23:11:41.524830 2959146 logs.go:284] 1 containers: [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51]
	I0914 23:11:41.524885 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:41.529112 2959146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 23:11:41.529176 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 23:11:41.568194 2959146 cri.go:89] found id: ""
	I0914 23:11:41.568216 2959146 logs.go:284] 0 containers: []
	W0914 23:11:41.568225 2959146 logs.go:286] No container was found matching "etcd"
	I0914 23:11:41.568231 2959146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 23:11:41.568288 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 23:11:41.610969 2959146 cri.go:89] found id: ""
	I0914 23:11:41.610992 2959146 logs.go:284] 0 containers: []
	W0914 23:11:41.611000 2959146 logs.go:286] No container was found matching "coredns"
	I0914 23:11:41.611008 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 23:11:41.611066 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 23:11:41.651043 2959146 cri.go:89] found id: "b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:41.651107 2959146 cri.go:89] found id: ""
	I0914 23:11:41.651132 2959146 logs.go:284] 1 containers: [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0]
	I0914 23:11:41.651219 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:41.655843 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 23:11:41.655942 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 23:11:41.696703 2959146 cri.go:89] found id: ""
	I0914 23:11:41.696766 2959146 logs.go:284] 0 containers: []
	W0914 23:11:41.696788 2959146 logs.go:286] No container was found matching "kube-proxy"
	I0914 23:11:41.696809 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 23:11:41.696880 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 23:11:41.736645 2959146 cri.go:89] found id: "44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387"
	I0914 23:11:41.736705 2959146 cri.go:89] found id: ""
	I0914 23:11:41.736728 2959146 logs.go:284] 1 containers: [44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387]
	I0914 23:11:41.736798 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:41.741121 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 23:11:41.741189 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 23:11:41.785022 2959146 cri.go:89] found id: ""
	I0914 23:11:41.785093 2959146 logs.go:284] 0 containers: []
	W0914 23:11:41.785117 2959146 logs.go:286] No container was found matching "kindnet"
	I0914 23:11:41.785135 2959146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 23:11:41.785221 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 23:11:41.828949 2959146 cri.go:89] found id: ""
	I0914 23:11:41.829013 2959146 logs.go:284] 0 containers: []
	W0914 23:11:41.829035 2959146 logs.go:286] No container was found matching "storage-provisioner"
	I0914 23:11:41.829062 2959146 logs.go:123] Gathering logs for kube-controller-manager [44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387] ...
	I0914 23:11:41.829092 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387"
	I0914 23:11:41.869909 2959146 logs.go:123] Gathering logs for CRI-O ...
	I0914 23:11:41.869935 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 23:11:41.920392 2959146 logs.go:123] Gathering logs for container status ...
	I0914 23:11:41.920427 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:11:41.970339 2959146 logs.go:123] Gathering logs for kubelet ...
	I0914 23:11:41.970367 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:11:42.096502 2959146 logs.go:123] Gathering logs for dmesg ...
	I0914 23:11:42.096537 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:11:42.120353 2959146 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:11:42.120385 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 23:11:42.194102 2959146 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 23:11:42.194123 2959146 logs.go:123] Gathering logs for kube-apiserver [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51] ...
	I0914 23:11:42.194135 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:42.239709 2959146 logs.go:123] Gathering logs for kube-scheduler [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0] ...
	I0914 23:11:42.239739 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:47.973981 2974444 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:11:48.474627 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:11:44.837268 2959146 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0914 23:11:44.837676 2959146 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0914 23:11:44.837724 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 23:11:44.837782 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 23:11:44.877886 2959146 cri.go:89] found id: "2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:44.877914 2959146 cri.go:89] found id: ""
	I0914 23:11:44.877923 2959146 logs.go:284] 1 containers: [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51]
	I0914 23:11:44.877985 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:44.882462 2959146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 23:11:44.882550 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 23:11:44.922465 2959146 cri.go:89] found id: ""
	I0914 23:11:44.922487 2959146 logs.go:284] 0 containers: []
	W0914 23:11:44.922495 2959146 logs.go:286] No container was found matching "etcd"
	I0914 23:11:44.922503 2959146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 23:11:44.922560 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 23:11:44.963051 2959146 cri.go:89] found id: ""
	I0914 23:11:44.963073 2959146 logs.go:284] 0 containers: []
	W0914 23:11:44.963081 2959146 logs.go:286] No container was found matching "coredns"
	I0914 23:11:44.963088 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 23:11:44.963145 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 23:11:45.007080 2959146 cri.go:89] found id: "b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:45.007105 2959146 cri.go:89] found id: ""
	I0914 23:11:45.007114 2959146 logs.go:284] 1 containers: [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0]
	I0914 23:11:45.007177 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:45.011910 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 23:11:45.011995 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 23:11:45.055378 2959146 cri.go:89] found id: ""
	I0914 23:11:45.055452 2959146 logs.go:284] 0 containers: []
	W0914 23:11:45.055469 2959146 logs.go:286] No container was found matching "kube-proxy"
	I0914 23:11:45.055477 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 23:11:45.055552 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 23:11:45.101017 2959146 cri.go:89] found id: "44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387"
	I0914 23:11:45.101040 2959146 cri.go:89] found id: ""
	I0914 23:11:45.101048 2959146 logs.go:284] 1 containers: [44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387]
	I0914 23:11:45.101111 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:45.106068 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 23:11:45.106139 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 23:11:45.151755 2959146 cri.go:89] found id: ""
	I0914 23:11:45.151780 2959146 logs.go:284] 0 containers: []
	W0914 23:11:45.151800 2959146 logs.go:286] No container was found matching "kindnet"
	I0914 23:11:45.151808 2959146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 23:11:45.151876 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 23:11:45.194545 2959146 cri.go:89] found id: ""
	I0914 23:11:45.194569 2959146 logs.go:284] 0 containers: []
	W0914 23:11:45.194577 2959146 logs.go:286] No container was found matching "storage-provisioner"
	I0914 23:11:45.194586 2959146 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:11:45.194598 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 23:11:45.279170 2959146 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 23:11:45.279190 2959146 logs.go:123] Gathering logs for kube-apiserver [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51] ...
	I0914 23:11:45.279203 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:45.326456 2959146 logs.go:123] Gathering logs for kube-scheduler [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0] ...
	I0914 23:11:45.326488 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:45.428401 2959146 logs.go:123] Gathering logs for kube-controller-manager [44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387] ...
	I0914 23:11:45.428437 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387"
	I0914 23:11:45.474030 2959146 logs.go:123] Gathering logs for CRI-O ...
	I0914 23:11:45.474060 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 23:11:45.524071 2959146 logs.go:123] Gathering logs for container status ...
	I0914 23:11:45.524105 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:11:45.569495 2959146 logs.go:123] Gathering logs for kubelet ...
	I0914 23:11:45.569523 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:11:45.701496 2959146 logs.go:123] Gathering logs for dmesg ...
	I0914 23:11:45.701528 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:11:48.225274 2959146 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0914 23:11:48.225635 2959146 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0914 23:11:48.225676 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 23:11:48.225732 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 23:11:48.265466 2959146 cri.go:89] found id: "2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:48.265490 2959146 cri.go:89] found id: ""
	I0914 23:11:48.265498 2959146 logs.go:284] 1 containers: [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51]
	I0914 23:11:48.265561 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:48.269889 2959146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 23:11:48.269960 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 23:11:48.311347 2959146 cri.go:89] found id: ""
	I0914 23:11:48.311368 2959146 logs.go:284] 0 containers: []
	W0914 23:11:48.311376 2959146 logs.go:286] No container was found matching "etcd"
	I0914 23:11:48.311384 2959146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 23:11:48.311439 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 23:11:48.352792 2959146 cri.go:89] found id: ""
	I0914 23:11:48.352814 2959146 logs.go:284] 0 containers: []
	W0914 23:11:48.352822 2959146 logs.go:286] No container was found matching "coredns"
	I0914 23:11:48.352827 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 23:11:48.352887 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 23:11:48.392780 2959146 cri.go:89] found id: "b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:48.392801 2959146 cri.go:89] found id: ""
	I0914 23:11:48.392809 2959146 logs.go:284] 1 containers: [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0]
	I0914 23:11:48.392864 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:48.397177 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 23:11:48.397262 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 23:11:48.437334 2959146 cri.go:89] found id: ""
	I0914 23:11:48.437361 2959146 logs.go:284] 0 containers: []
	W0914 23:11:48.437378 2959146 logs.go:286] No container was found matching "kube-proxy"
	I0914 23:11:48.437385 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 23:11:48.437456 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 23:11:48.478424 2959146 cri.go:89] found id: "44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387"
	I0914 23:11:48.478483 2959146 cri.go:89] found id: ""
	I0914 23:11:48.478498 2959146 logs.go:284] 1 containers: [44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387]
	I0914 23:11:48.478553 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:48.482743 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 23:11:48.482807 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 23:11:48.522517 2959146 cri.go:89] found id: ""
	I0914 23:11:48.522580 2959146 logs.go:284] 0 containers: []
	W0914 23:11:48.522596 2959146 logs.go:286] No container was found matching "kindnet"
	I0914 23:11:48.522604 2959146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 23:11:48.522670 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 23:11:48.565391 2959146 cri.go:89] found id: ""
	I0914 23:11:48.565413 2959146 logs.go:284] 0 containers: []
	W0914 23:11:48.565422 2959146 logs.go:286] No container was found matching "storage-provisioner"
	I0914 23:11:48.565431 2959146 logs.go:123] Gathering logs for kube-scheduler [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0] ...
	I0914 23:11:48.565444 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:48.685869 2959146 logs.go:123] Gathering logs for kube-controller-manager [44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387] ...
	I0914 23:11:48.685902 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387"
	I0914 23:11:48.730660 2959146 logs.go:123] Gathering logs for CRI-O ...
	I0914 23:11:48.730735 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 23:11:48.782478 2959146 logs.go:123] Gathering logs for container status ...
	I0914 23:11:48.782513 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:11:48.833763 2959146 logs.go:123] Gathering logs for kubelet ...
	I0914 23:11:48.833791 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:11:48.963411 2959146 logs.go:123] Gathering logs for dmesg ...
	I0914 23:11:48.963445 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:11:48.987488 2959146 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:11:48.987516 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 23:11:49.063122 2959146 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 23:11:49.063143 2959146 logs.go:123] Gathering logs for kube-apiserver [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51] ...
	I0914 23:11:49.063155 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:53.475808 2974444 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:11:53.475848 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:11:54.470967 2974444 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:55796->192.168.76.2:8443: read: connection reset by peer
	I0914 23:11:54.471003 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:11:54.471312 2974444 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0914 23:11:54.474522 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:11:54.474847 2974444 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
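The alternating api_server.go:253/269 lines are a polling loop: every attempt issues a GET against https://<node-ip>:8443/healthz, and a dial error ("connection refused"), a connection reset, or a client timeout all simply mean the apiserver is not answering yet, so the probe backs off and retries until its deadline. A minimal Go sketch of such a probe follows; it skips certificate verification, as a bare health probe against a private test endpoint typically would, and is illustrative rather than minikube's implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers 200 "ok"
// or the overall deadline expires. Dial errors and timeouts are treated as
// "not ready yet", mirroring the "stopped: ... connection refused" log lines.
func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // health probe only
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		} else {
			fmt.Printf("stopped: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz never reported healthy within %s", deadline)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}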
	I0914 23:11:51.612921 2959146 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0914 23:11:51.613323 2959146 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0914 23:11:51.613410 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 23:11:51.613472 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 23:11:51.654286 2959146 cri.go:89] found id: "2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:51.654312 2959146 cri.go:89] found id: ""
	I0914 23:11:51.654321 2959146 logs.go:284] 1 containers: [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51]
	I0914 23:11:51.654375 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:51.658959 2959146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 23:11:51.659027 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 23:11:51.701027 2959146 cri.go:89] found id: ""
	I0914 23:11:51.701047 2959146 logs.go:284] 0 containers: []
	W0914 23:11:51.701056 2959146 logs.go:286] No container was found matching "etcd"
	I0914 23:11:51.701062 2959146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 23:11:51.701117 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 23:11:51.741327 2959146 cri.go:89] found id: ""
	I0914 23:11:51.741349 2959146 logs.go:284] 0 containers: []
	W0914 23:11:51.741357 2959146 logs.go:286] No container was found matching "coredns"
	I0914 23:11:51.741364 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 23:11:51.741422 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 23:11:51.782479 2959146 cri.go:89] found id: "b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:51.782501 2959146 cri.go:89] found id: ""
	I0914 23:11:51.782509 2959146 logs.go:284] 1 containers: [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0]
	I0914 23:11:51.782565 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:51.786991 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 23:11:51.787101 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 23:11:51.827465 2959146 cri.go:89] found id: ""
	I0914 23:11:51.827488 2959146 logs.go:284] 0 containers: []
	W0914 23:11:51.827496 2959146 logs.go:286] No container was found matching "kube-proxy"
	I0914 23:11:51.827502 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 23:11:51.827559 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 23:11:51.869828 2959146 cri.go:89] found id: "44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387"
	I0914 23:11:51.869850 2959146 cri.go:89] found id: ""
	I0914 23:11:51.869859 2959146 logs.go:284] 1 containers: [44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387]
	I0914 23:11:51.869920 2959146 ssh_runner.go:195] Run: which crictl
	I0914 23:11:51.874299 2959146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 23:11:51.874365 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 23:11:51.916386 2959146 cri.go:89] found id: ""
	I0914 23:11:51.916407 2959146 logs.go:284] 0 containers: []
	W0914 23:11:51.916415 2959146 logs.go:286] No container was found matching "kindnet"
	I0914 23:11:51.916422 2959146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 23:11:51.916484 2959146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 23:11:51.956839 2959146 cri.go:89] found id: ""
	I0914 23:11:51.956860 2959146 logs.go:284] 0 containers: []
	W0914 23:11:51.956868 2959146 logs.go:286] No container was found matching "storage-provisioner"
	I0914 23:11:51.956877 2959146 logs.go:123] Gathering logs for kubelet ...
	I0914 23:11:51.956889 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 23:11:52.085225 2959146 logs.go:123] Gathering logs for dmesg ...
	I0914 23:11:52.085259 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 23:11:52.108988 2959146 logs.go:123] Gathering logs for describe nodes ...
	I0914 23:11:52.109021 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 23:11:52.185889 2959146 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 23:11:52.185911 2959146 logs.go:123] Gathering logs for kube-apiserver [2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51] ...
	I0914 23:11:52.185925 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2034500a9cbb6b30d08214ac551280644e9806aa60d5feef967e83854a27fb51"
	I0914 23:11:52.232390 2959146 logs.go:123] Gathering logs for kube-scheduler [b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0] ...
	I0914 23:11:52.232418 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4160d1560e7e803c0a5752691fc3350fbcaa4a13a31289ecce94d47803ca1e0"
	I0914 23:11:52.330383 2959146 logs.go:123] Gathering logs for kube-controller-manager [44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387] ...
	I0914 23:11:52.330419 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44ffe41fc5fa1f5f0c289d49d9926bfc8e7538e1dd5e33ef3baa6b6432614387"
	I0914 23:11:52.369875 2959146 logs.go:123] Gathering logs for CRI-O ...
	I0914 23:11:52.369902 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 23:11:52.420933 2959146 logs.go:123] Gathering logs for container status ...
	I0914 23:11:52.420969 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 23:11:54.974784 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:11:54.975103 2974444 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0914 23:11:55.474819 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:11:55.475217 2974444 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0914 23:11:55.974666 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:11:55.975038 2974444 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0914 23:11:56.474736 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:11:56.475086 2974444 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0914 23:11:56.974725 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:11:54.964202 2959146 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0914 23:11:54.964624 2959146 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0914 23:11:54.964686 2959146 kubeadm.go:640] restartCluster took 4m6.433063372s
	W0914 23:11:54.964760 2959146 out.go:239] ! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: context deadline exceeded
	I0914 23:11:54.964789 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
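At this point the 2959146 process has spent its entire restartCluster budget (just over four minutes, 4m6.4s) without the apiserver on 192.168.67.2 ever reporting healthy, so it abandons the repair path: it runs "kubeadm reset --cri-socket /var/run/crio/crio.sock --force" on the node and, once that completes further down, falls back to a fresh "kubeadm init". A hedged Go sketch of that fallback sequence, using the same commands the log shows but plain os/exec in place of minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command on the node and prints its combined output; in minikube
// this goes through ssh_runner, here plain os/exec stands in for illustration.
func run(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	// Wipe the half-started control plane, as in the logged reset command.
	if err := run("sudo", "kubeadm", "reset",
		"--cri-socket", "/var/run/crio/crio.sock", "--force"); err != nil {
		fmt.Println("reset failed:", err)
		return
	}
	// Re-bootstrap from the generated config, as in the logged init command
	// (the long --ignore-preflight-errors list is elided here).
	if err := run("sudo", "kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Println("init failed:", err)
	}
}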
	I0914 23:12:01.975361 2974444 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0914 23:12:01.975394 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:12:03.180748 2974444 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 23:12:03.180774 2974444 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 23:12:03.180790 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:12:03.264523 2974444 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 23:12:03.264597 2974444 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 23:12:03.474770 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:12:03.483979 2974444 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 23:12:03.484000 2974444 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
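The probe is now making progress: the 403 means the apiserver is serving HTTPS but the authorization that lets anonymous requests read /healthz is not in place yet (consistent with the rbac/bootstrap-roles hook still pending in the 500 body), and the 500s list exactly which post-start hooks remain, the "[-]" entries: rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes. Once those hooks finish, the endpoint returns 200 "ok" below. A small illustrative Go helper that pulls the failing check names out of such a verbose healthz body (the parsing is an assumption about how one might do it, not minikube's code):

package main

import (
	"fmt"
	"strings"
)

// failingChecks returns the healthz check names whose verbose line starts with
// "[-]", i.e. the checks and post-start hooks that have not completed yet.
func failingChecks(body string) []string {
	var failed []string
	for _, line := range strings.Split(body, "\n") {
		line = strings.TrimSpace(line)
		if strings.HasPrefix(line, "[-]") {
			// line looks like: "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld"
			name := strings.Fields(strings.TrimPrefix(line, "[-]"))[0]
			failed = append(failed, name)
		}
	}
	return failed
}

func main() {
	body := `[+]ping ok
[+]etcd ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
healthz check failed`
	fmt.Println(failingChecks(body))
	// Prints: [poststarthook/rbac/bootstrap-roles poststarthook/scheduling/bootstrap-system-priority-classes]
}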
	I0914 23:12:03.974133 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:12:03.989403 2974444 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 23:12:03.989480 2974444 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 23:12:04.474949 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:12:04.504167 2974444 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 23:12:04.504242 2974444 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 23:12:04.974104 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:12:05.006252 2974444 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0914 23:12:05.052031 2974444 api_server.go:141] control plane version: v1.28.1
	I0914 23:12:05.052057 2974444 api_server.go:131] duration metric: took 27.079416243s to wait for apiserver health ...
	I0914 23:12:05.052067 2974444 cni.go:84] Creating CNI manager for ""
	I0914 23:12:05.052074 2974444 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 23:12:05.055138 2974444 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0914 23:12:05.211493 2959146 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (10.246677867s)
	I0914 23:12:05.211560 2959146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 23:12:05.232941 2959146 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 23:12:05.246305 2959146 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0914 23:12:05.246364 2959146 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 23:12:05.262182 2959146 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 23:12:05.262219 2959146 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0914 23:12:05.358917 2959146 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0914 23:12:05.358969 2959146 kubeadm.go:322] [preflight] Running pre-flight checks
	I0914 23:12:05.426235 2959146 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0914 23:12:05.426300 2959146 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1044-aws
	I0914 23:12:05.426334 2959146 kubeadm.go:322] OS: Linux
	I0914 23:12:05.426376 2959146 kubeadm.go:322] CGROUPS_CPU: enabled
	I0914 23:12:05.426421 2959146 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0914 23:12:05.426465 2959146 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0914 23:12:05.426510 2959146 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0914 23:12:05.426555 2959146 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0914 23:12:05.426600 2959146 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0914 23:12:05.426642 2959146 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0914 23:12:05.426687 2959146 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0914 23:12:05.426739 2959146 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0914 23:12:05.571970 2959146 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 23:12:05.572086 2959146 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 23:12:05.572177 2959146 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 23:12:05.996842 2959146 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 23:12:05.999314 2959146 out.go:204]   - Generating certificates and keys ...
	I0914 23:12:06.005107 2959146 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0914 23:12:06.005186 2959146 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0914 23:12:06.005264 2959146 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 23:12:06.005325 2959146 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0914 23:12:06.005393 2959146 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 23:12:06.005447 2959146 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0914 23:12:06.005509 2959146 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0914 23:12:06.005570 2959146 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0914 23:12:06.008249 2959146 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 23:12:06.008347 2959146 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 23:12:06.009756 2959146 kubeadm.go:322] [certs] Using the existing "sa" key
	I0914 23:12:06.009824 2959146 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 23:12:06.502529 2959146 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 23:12:06.767959 2959146 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 23:12:06.985082 2959146 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 23:12:07.230509 2959146 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 23:12:07.231113 2959146 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 23:12:07.236301 2959146 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 23:12:05.057759 2974444 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 23:12:05.073734 2974444 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0914 23:12:05.073751 2974444 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0914 23:12:05.118804 2974444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 23:12:06.329020 2974444 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.210181727s)
	I0914 23:12:06.329048 2974444 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 23:12:06.348758 2974444 system_pods.go:59] 7 kube-system pods found
	I0914 23:12:06.349864 2974444 system_pods.go:61] "coredns-5dd5756b68-fsjl2" [67bad9d6-02e3-402b-b63e-83403a6c00c4] Running
	I0914 23:12:06.349899 2974444 system_pods.go:61] "etcd-pause-188837" [93cf2058-c73c-49a3-9199-8f891b7bf9a7] Running
	I0914 23:12:06.349924 2974444 system_pods.go:61] "kindnet-rw9vg" [fe2fe062-01ec-4c26-b6d1-c181f2d685ea] Running
	I0914 23:12:06.349948 2974444 system_pods.go:61] "kube-apiserver-pause-188837" [4ea4415b-c449-4b3c-9613-cf902f8436ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 23:12:06.349971 2974444 system_pods.go:61] "kube-controller-manager-pause-188837" [eb732242-4a9c-4f7b-9aa5-bd9b142821c8] Running
	I0914 23:12:06.350003 2974444 system_pods.go:61] "kube-proxy-lprwg" [b888ea22-8d29-4c36-a973-02cd1262b1ae] Running
	I0914 23:12:06.350026 2974444 system_pods.go:61] "kube-scheduler-pause-188837" [bb6908cf-28a3-43f6-ad86-824aa11d1ade] Running
	I0914 23:12:06.350047 2974444 system_pods.go:74] duration metric: took 20.99262ms to wait for pod list to return data ...
	I0914 23:12:06.350066 2974444 node_conditions.go:102] verifying NodePressure condition ...
	I0914 23:12:06.355026 2974444 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 23:12:06.355092 2974444 node_conditions.go:123] node cpu capacity is 2
	I0914 23:12:06.355116 2974444 node_conditions.go:105] duration metric: took 5.031644ms to run NodePressure ...
	I0914 23:12:06.355147 2974444 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 23:12:06.695648 2974444 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 23:12:06.704784 2974444 kubeadm.go:787] kubelet initialised
	I0914 23:12:06.704853 2974444 kubeadm.go:788] duration metric: took 9.148483ms waiting for restarted kubelet to initialise ...
	I0914 23:12:06.704875 2974444 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 23:12:06.713967 2974444 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-fsjl2" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:06.728852 2974444 pod_ready.go:92] pod "coredns-5dd5756b68-fsjl2" in "kube-system" namespace has status "Ready":"True"
	I0914 23:12:06.728920 2974444 pod_ready.go:81] duration metric: took 14.880795ms waiting for pod "coredns-5dd5756b68-fsjl2" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:06.728947 2974444 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:06.743535 2974444 pod_ready.go:92] pod "etcd-pause-188837" in "kube-system" namespace has status "Ready":"True"
	I0914 23:12:06.743604 2974444 pod_ready.go:81] duration metric: took 14.634289ms waiting for pod "etcd-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:06.743633 2974444 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:08.783079 2974444 pod_ready.go:102] pod "kube-apiserver-pause-188837" in "kube-system" namespace has status "Ready":"False"
	I0914 23:12:07.238531 2959146 out.go:204]   - Booting up control plane ...
	I0914 23:12:07.238660 2959146 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 23:12:07.238753 2959146 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 23:12:07.239382 2959146 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 23:12:07.250480 2959146 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 23:12:07.252063 2959146 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 23:12:07.252327 2959146 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0914 23:12:07.354716 2959146 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 23:12:10.784465 2974444 pod_ready.go:102] pod "kube-apiserver-pause-188837" in "kube-system" namespace has status "Ready":"False"
	I0914 23:12:13.283225 2974444 pod_ready.go:102] pod "kube-apiserver-pause-188837" in "kube-system" namespace has status "Ready":"False"
	I0914 23:12:15.287316 2974444 pod_ready.go:102] pod "kube-apiserver-pause-188837" in "kube-system" namespace has status "Ready":"False"
	I0914 23:12:15.794892 2974444 pod_ready.go:92] pod "kube-apiserver-pause-188837" in "kube-system" namespace has status "Ready":"True"
	I0914 23:12:15.794915 2974444 pod_ready.go:81] duration metric: took 9.051260798s waiting for pod "kube-apiserver-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:15.794927 2974444 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:15.809173 2974444 pod_ready.go:92] pod "kube-controller-manager-pause-188837" in "kube-system" namespace has status "Ready":"True"
	I0914 23:12:15.809192 2974444 pod_ready.go:81] duration metric: took 14.257592ms waiting for pod "kube-controller-manager-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:15.809204 2974444 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lprwg" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:15.818886 2974444 pod_ready.go:92] pod "kube-proxy-lprwg" in "kube-system" namespace has status "Ready":"True"
	I0914 23:12:15.818953 2974444 pod_ready.go:81] duration metric: took 9.740203ms waiting for pod "kube-proxy-lprwg" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:15.818979 2974444 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:15.834530 2974444 pod_ready.go:92] pod "kube-scheduler-pause-188837" in "kube-system" namespace has status "Ready":"True"
	I0914 23:12:15.834599 2974444 pod_ready.go:81] duration metric: took 15.597858ms waiting for pod "kube-scheduler-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:15.834623 2974444 pod_ready.go:38] duration metric: took 9.129724991s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
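The pod_ready.go waits above poll each system-critical pod until its Ready condition reports True, within the 4m0s budget shown. For reference, a minimal client-go sketch of that style of check might look like the following (illustrative only: the kubeconfig path is a placeholder and this is not minikube's actual pod_ready.go implementation; the pod name is just one of the pods listed above):

    // Sketch: wait for a pod's Ready condition using client-go.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path, not taken from this run.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-188837", metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("pod is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }

minikube additionally restricts the wait to pods carrying the component/k8s-app labels listed above; the sketch checks a single named pod for brevity.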
	I0914 23:12:15.834672 2974444 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 23:12:15.850268 2974444 ops.go:34] apiserver oom_adj: -16
	I0914 23:12:15.850301 2974444 kubeadm.go:640] restartCluster took 52.541449512s
	I0914 23:12:15.850310 2974444 kubeadm.go:406] StartCluster complete in 52.635899566s
	I0914 23:12:15.850326 2974444 settings.go:142] acquiring lock: {Name:mk797c549b93011f59a1b1413899d7ef3e9584bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:12:15.850399 2974444 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 23:12:15.851384 2974444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/kubeconfig: {Name:mk7bbed64d52f47ff1629e01e738a8a5f092c9ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:12:15.851696 2974444 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 23:12:15.851994 2974444 config.go:182] Loaded profile config "pause-188837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 23:12:15.852117 2974444 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0914 23:12:15.854449 2974444 out.go:177] * Enabled addons: 
	I0914 23:12:15.852605 2974444 kapi.go:59] client config for pause-188837: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/pause-188837/client.key", CAFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 23:12:15.856393 2974444 addons.go:502] enable addons completed in 4.268535ms: enabled=[]
	I0914 23:12:15.869598 2974444 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-188837" context rescaled to 1 replicas
	I0914 23:12:15.869681 2974444 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 23:12:15.871709 2974444 out.go:177] * Verifying Kubernetes components...
	I0914 23:12:15.857902 2959146 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503269 seconds
	I0914 23:12:15.858012 2959146 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 23:12:15.895199 2959146 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 23:12:16.425507 2959146 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 23:12:16.425714 2959146 kubeadm.go:322] [mark-control-plane] Marking the node kubernetes-upgrade-448798 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 23:12:16.937570 2959146 kubeadm.go:322] [bootstrap-token] Using token: qp07rb.ep5c50ece6odjcij
	I0914 23:12:16.939927 2959146 out.go:204]   - Configuring RBAC rules ...
	I0914 23:12:16.940058 2959146 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 23:12:16.947029 2959146 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 23:12:16.956047 2959146 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 23:12:16.960303 2959146 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 23:12:16.965975 2959146 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 23:12:16.969976 2959146 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 23:12:16.987722 2959146 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 23:12:17.254951 2959146 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0914 23:12:17.390863 2959146 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0914 23:12:17.393155 2959146 kubeadm.go:322] 
	I0914 23:12:17.393226 2959146 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0914 23:12:17.393231 2959146 kubeadm.go:322] 
	I0914 23:12:17.393303 2959146 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0914 23:12:17.393308 2959146 kubeadm.go:322] 
	I0914 23:12:17.393332 2959146 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0914 23:12:17.393387 2959146 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 23:12:17.393435 2959146 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 23:12:17.393440 2959146 kubeadm.go:322] 
	I0914 23:12:17.393490 2959146 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0914 23:12:17.393495 2959146 kubeadm.go:322] 
	I0914 23:12:17.393539 2959146 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 23:12:17.393544 2959146 kubeadm.go:322] 
	I0914 23:12:17.393592 2959146 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0914 23:12:17.393662 2959146 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 23:12:17.393726 2959146 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 23:12:17.393731 2959146 kubeadm.go:322] 
	I0914 23:12:17.393810 2959146 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 23:12:17.393881 2959146 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0914 23:12:17.393886 2959146 kubeadm.go:322] 
	I0914 23:12:17.393964 2959146 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token qp07rb.ep5c50ece6odjcij \
	I0914 23:12:17.394060 2959146 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a79084f9d3253dc9cd91fd80defbeb60d0999d7c0aaf6667eb02a65d009cd9dc \
	I0914 23:12:17.394080 2959146 kubeadm.go:322] 	--control-plane 
	I0914 23:12:17.394085 2959146 kubeadm.go:322] 
	I0914 23:12:17.394164 2959146 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0914 23:12:17.394169 2959146 kubeadm.go:322] 
	I0914 23:12:17.394245 2959146 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token qp07rb.ep5c50ece6odjcij \
	I0914 23:12:17.394340 2959146 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a79084f9d3253dc9cd91fd80defbeb60d0999d7c0aaf6667eb02a65d009cd9dc 
	I0914 23:12:17.396936 2959146 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-aws\n", err: exit status 1
	I0914 23:12:17.397048 2959146 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 23:12:17.397062 2959146 cni.go:84] Creating CNI manager for ""
	I0914 23:12:17.397070 2959146 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 23:12:17.399538 2959146 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0914 23:12:17.401404 2959146 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 23:12:17.408038 2959146 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0914 23:12:17.408056 2959146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0914 23:12:17.440803 2959146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 23:12:18.381701 2959146 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 23:12:18.381765 2959146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:12:18.381826 2959146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82 minikube.k8s.io/name=kubernetes-upgrade-448798 minikube.k8s.io/updated_at=2023_09_14T23_12_18_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 23:12:18.548738 2959146 ops.go:34] apiserver oom_adj: -16
	I0914 23:12:18.548761 2959146 kubeadm.go:1081] duration metric: took 167.060129ms to wait for elevateKubeSystemPrivileges.
	I0914 23:12:18.548775 2959146 kubeadm.go:406] StartCluster complete in 4m30.08369512s
	I0914 23:12:18.548790 2959146 settings.go:142] acquiring lock: {Name:mk797c549b93011f59a1b1413899d7ef3e9584bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:12:18.548850 2959146 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 23:12:18.549935 2959146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/kubeconfig: {Name:mk7bbed64d52f47ff1629e01e738a8a5f092c9ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 23:12:18.550157 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 23:12:18.550412 2959146 config.go:182] Loaded profile config "kubernetes-upgrade-448798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 23:12:18.550515 2959146 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0914 23:12:18.550574 2959146 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-448798"
	I0914 23:12:18.550588 2959146 addons.go:231] Setting addon storage-provisioner=true in "kubernetes-upgrade-448798"
	W0914 23:12:18.550594 2959146 addons.go:240] addon storage-provisioner should already be in state true
	I0914 23:12:18.550650 2959146 host.go:66] Checking if "kubernetes-upgrade-448798" exists ...
	I0914 23:12:18.550849 2959146 kapi.go:59] client config for kubernetes-upgrade-448798: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/kubernetes-upgrade-448798/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/kubernetes-upgrade-448798/client.key", CAFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8
(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 23:12:18.551420 2959146 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-448798 --format={{.State.Status}}
	I0914 23:12:18.551865 2959146 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-448798"
	I0914 23:12:18.551883 2959146 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-448798"
	I0914 23:12:18.552254 2959146 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-448798 --format={{.State.Status}}
	I0914 23:12:18.587868 2959146 kapi.go:59] client config for kubernetes-upgrade-448798: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/kubernetes-upgrade-448798/client.crt", KeyFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/kubernetes-upgrade-448798/client.key", CAFile:"/home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8
(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 23:12:18.608134 2959146 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 23:12:18.612753 2959146 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 23:12:18.612774 2959146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 23:12:18.612841 2959146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-448798
	I0914 23:12:18.633909 2959146 addons.go:231] Setting addon default-storageclass=true in "kubernetes-upgrade-448798"
	W0914 23:12:18.633984 2959146 addons.go:240] addon default-storageclass should already be in state true
	I0914 23:12:18.634010 2959146 host.go:66] Checking if "kubernetes-upgrade-448798" exists ...
	I0914 23:12:18.634459 2959146 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-448798 --format={{.State.Status}}
	I0914 23:12:18.647190 2959146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36562 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/kubernetes-upgrade-448798/id_rsa Username:docker}
	I0914 23:12:18.681904 2959146 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-448798" context rescaled to 1 replicas
	I0914 23:12:18.681956 2959146 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 23:12:18.684357 2959146 out.go:177] * Verifying Kubernetes components...
	I0914 23:12:15.873843 2974444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 23:12:16.013326 2974444 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0914 23:12:16.013369 2974444 node_ready.go:35] waiting up to 6m0s for node "pause-188837" to be "Ready" ...
	I0914 23:12:16.016463 2974444 node_ready.go:49] node "pause-188837" has status "Ready":"True"
	I0914 23:12:16.016486 2974444 node_ready.go:38] duration metric: took 3.104442ms waiting for node "pause-188837" to be "Ready" ...
	I0914 23:12:16.016520 2974444 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 23:12:16.022660 2974444 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fsjl2" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:16.181590 2974444 pod_ready.go:92] pod "coredns-5dd5756b68-fsjl2" in "kube-system" namespace has status "Ready":"True"
	I0914 23:12:16.181613 2974444 pod_ready.go:81] duration metric: took 158.921262ms waiting for pod "coredns-5dd5756b68-fsjl2" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:16.181626 2974444 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:16.581524 2974444 pod_ready.go:92] pod "etcd-pause-188837" in "kube-system" namespace has status "Ready":"True"
	I0914 23:12:16.581595 2974444 pod_ready.go:81] duration metric: took 399.960574ms waiting for pod "etcd-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:16.581620 2974444 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:16.982027 2974444 pod_ready.go:92] pod "kube-apiserver-pause-188837" in "kube-system" namespace has status "Ready":"True"
	I0914 23:12:16.982050 2974444 pod_ready.go:81] duration metric: took 400.42285ms waiting for pod "kube-apiserver-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:16.982063 2974444 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:17.381446 2974444 pod_ready.go:92] pod "kube-controller-manager-pause-188837" in "kube-system" namespace has status "Ready":"True"
	I0914 23:12:17.381516 2974444 pod_ready.go:81] duration metric: took 399.44384ms waiting for pod "kube-controller-manager-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:17.381544 2974444 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lprwg" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:17.780952 2974444 pod_ready.go:92] pod "kube-proxy-lprwg" in "kube-system" namespace has status "Ready":"True"
	I0914 23:12:17.781022 2974444 pod_ready.go:81] duration metric: took 399.456312ms waiting for pod "kube-proxy-lprwg" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:17.781047 2974444 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:18.195041 2974444 pod_ready.go:92] pod "kube-scheduler-pause-188837" in "kube-system" namespace has status "Ready":"True"
	I0914 23:12:18.195114 2974444 pod_ready.go:81] duration metric: took 414.043774ms waiting for pod "kube-scheduler-pause-188837" in "kube-system" namespace to be "Ready" ...
	I0914 23:12:18.195138 2974444 pod_ready.go:38] duration metric: took 2.178606476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 23:12:18.195167 2974444 api_server.go:52] waiting for apiserver process to appear ...
	I0914 23:12:18.195254 2974444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:12:18.213428 2974444 api_server.go:72] duration metric: took 2.34369082s to wait for apiserver process to appear ...
	I0914 23:12:18.213495 2974444 api_server.go:88] waiting for apiserver healthz status ...
	I0914 23:12:18.213529 2974444 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0914 23:12:18.222994 2974444 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0914 23:12:18.224905 2974444 api_server.go:141] control plane version: v1.28.1
	I0914 23:12:18.224926 2974444 api_server.go:131] duration metric: took 11.411333ms to wait for apiserver health ...
	I0914 23:12:18.224934 2974444 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 23:12:18.385978 2974444 system_pods.go:59] 7 kube-system pods found
	I0914 23:12:18.386014 2974444 system_pods.go:61] "coredns-5dd5756b68-fsjl2" [67bad9d6-02e3-402b-b63e-83403a6c00c4] Running
	I0914 23:12:18.386021 2974444 system_pods.go:61] "etcd-pause-188837" [93cf2058-c73c-49a3-9199-8f891b7bf9a7] Running
	I0914 23:12:18.386027 2974444 system_pods.go:61] "kindnet-rw9vg" [fe2fe062-01ec-4c26-b6d1-c181f2d685ea] Running
	I0914 23:12:18.386051 2974444 system_pods.go:61] "kube-apiserver-pause-188837" [4ea4415b-c449-4b3c-9613-cf902f8436ea] Running
	I0914 23:12:18.386069 2974444 system_pods.go:61] "kube-controller-manager-pause-188837" [eb732242-4a9c-4f7b-9aa5-bd9b142821c8] Running
	I0914 23:12:18.386075 2974444 system_pods.go:61] "kube-proxy-lprwg" [b888ea22-8d29-4c36-a973-02cd1262b1ae] Running
	I0914 23:12:18.386086 2974444 system_pods.go:61] "kube-scheduler-pause-188837" [bb6908cf-28a3-43f6-ad86-824aa11d1ade] Running
	I0914 23:12:18.386092 2974444 system_pods.go:74] duration metric: took 161.152326ms to wait for pod list to return data ...
	I0914 23:12:18.386106 2974444 default_sa.go:34] waiting for default service account to be created ...
	I0914 23:12:18.589115 2974444 default_sa.go:45] found service account: "default"
	I0914 23:12:18.589135 2974444 default_sa.go:55] duration metric: took 203.022857ms for default service account to be created ...
	I0914 23:12:18.589146 2974444 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 23:12:18.784665 2974444 system_pods.go:86] 7 kube-system pods found
	I0914 23:12:18.784746 2974444 system_pods.go:89] "coredns-5dd5756b68-fsjl2" [67bad9d6-02e3-402b-b63e-83403a6c00c4] Running
	I0914 23:12:18.784770 2974444 system_pods.go:89] "etcd-pause-188837" [93cf2058-c73c-49a3-9199-8f891b7bf9a7] Running
	I0914 23:12:18.784797 2974444 system_pods.go:89] "kindnet-rw9vg" [fe2fe062-01ec-4c26-b6d1-c181f2d685ea] Running
	I0914 23:12:18.784828 2974444 system_pods.go:89] "kube-apiserver-pause-188837" [4ea4415b-c449-4b3c-9613-cf902f8436ea] Running
	I0914 23:12:18.784854 2974444 system_pods.go:89] "kube-controller-manager-pause-188837" [eb732242-4a9c-4f7b-9aa5-bd9b142821c8] Running
	I0914 23:12:18.784881 2974444 system_pods.go:89] "kube-proxy-lprwg" [b888ea22-8d29-4c36-a973-02cd1262b1ae] Running
	I0914 23:12:18.784907 2974444 system_pods.go:89] "kube-scheduler-pause-188837" [bb6908cf-28a3-43f6-ad86-824aa11d1ade] Running
	I0914 23:12:18.784931 2974444 system_pods.go:126] duration metric: took 195.77766ms to wait for k8s-apps to be running ...
	I0914 23:12:18.784953 2974444 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 23:12:18.785023 2974444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 23:12:18.799837 2974444 system_svc.go:56] duration metric: took 14.874305ms WaitForService to wait for kubelet.
	I0914 23:12:18.799860 2974444 kubeadm.go:581] duration metric: took 2.930131172s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 23:12:18.799878 2974444 node_conditions.go:102] verifying NodePressure condition ...
	I0914 23:12:18.981756 2974444 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 23:12:18.981782 2974444 node_conditions.go:123] node cpu capacity is 2
	I0914 23:12:18.981792 2974444 node_conditions.go:105] duration metric: took 181.909343ms to run NodePressure ...
	I0914 23:12:18.981804 2974444 start.go:228] waiting for startup goroutines ...
	I0914 23:12:18.981811 2974444 start.go:233] waiting for cluster config update ...
	I0914 23:12:18.981818 2974444 start.go:242] writing updated cluster config ...
	I0914 23:12:18.982136 2974444 ssh_runner.go:195] Run: rm -f paused
	I0914 23:12:19.077458 2974444 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 23:12:19.079949 2974444 out.go:177] * Done! kubectl is now configured to use "pause-188837" cluster and "default" namespace by default
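The api_server.go lines above poll the apiserver's /healthz endpoint until it returns 200 ("ok"). A bare-bones sketch of such a poll using only the Go standard library could look like this (the URL is the one from the log; TLS verification is skipped here only because the sketch does not load the cluster CA, which minikube's real check has available):

    // Sketch: poll an apiserver /healthz endpoint until it reports healthy.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only; load the cluster CA in real use
            },
        }
        for i := 0; i < 20; i++ {
            resp, err := client.Get("https://192.168.76.2:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
                if resp.StatusCode == http.StatusOK {
                    return // matches the log's "returned 200: ok"
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver never reported healthy")
    }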
	I0914 23:12:18.686632 2959146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 23:12:18.683620 2959146 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 23:12:18.686664 2959146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 23:12:18.686714 2959146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-448798
	I0914 23:12:18.717094 2959146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36562 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/kubernetes-upgrade-448798/id_rsa Username:docker}
	I0914 23:12:18.804696 2959146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 23:12:18.804776 2959146 api_server.go:52] waiting for apiserver process to appear ...
	I0914 23:12:18.804825 2959146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 23:12:18.827192 2959146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 23:12:18.910595 2959146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 23:12:19.633240 2959146 api_server.go:72] duration metric: took 951.254099ms to wait for apiserver process to appear ...
	I0914 23:12:19.633265 2959146 api_server.go:88] waiting for apiserver healthz status ...
	I0914 23:12:19.633282 2959146 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0914 23:12:19.633545 2959146 start.go:917] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS's ConfigMap
	I0914 23:12:19.645179 2959146 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0914 23:12:19.647270 2959146 api_server.go:141] control plane version: v1.28.1
	I0914 23:12:19.647307 2959146 api_server.go:131] duration metric: took 14.035561ms to wait for apiserver health ...
	I0914 23:12:19.647317 2959146 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 23:12:19.668252 2959146 system_pods.go:59] 4 kube-system pods found
	I0914 23:12:19.668286 2959146 system_pods.go:61] "etcd-kubernetes-upgrade-448798" [c631d3e0-ba2d-41ad-83d1-4f8ca4e3d3b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 23:12:19.668297 2959146 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-448798" [7be7af7e-341d-46b8-a7ce-f0a798dbfd35] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 23:12:19.668306 2959146 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-448798" [40582af6-eb41-4862-ba27-28a6cfaf87c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 23:12:19.668315 2959146 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-448798" [8c503658-ed66-460f-ba68-b3acf7e094ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 23:12:19.668322 2959146 system_pods.go:74] duration metric: took 20.998691ms to wait for pod list to return data ...
	I0914 23:12:19.668330 2959146 kubeadm.go:581] duration metric: took 986.35254ms to wait for : map[apiserver:true system_pods:true] ...
	I0914 23:12:19.668342 2959146 node_conditions.go:102] verifying NodePressure condition ...
	I0914 23:12:19.718697 2959146 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 23:12:19.718735 2959146 node_conditions.go:123] node cpu capacity is 2
	I0914 23:12:19.718745 2959146 node_conditions.go:105] duration metric: took 50.399126ms to run NodePressure ...
	I0914 23:12:19.718758 2959146 start.go:228] waiting for startup goroutines ...
	I0914 23:12:19.943789 2959146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.033119766s)
	I0914 23:12:19.943787 2959146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.116565257s)
	I0914 23:12:19.945691 2959146 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0914 23:12:19.948412 2959146 addons.go:502] enable addons completed in 1.397882855s: enabled=[default-storageclass storage-provisioner]
	I0914 23:12:19.948452 2959146 start.go:233] waiting for cluster config update ...
	I0914 23:12:19.948464 2959146 start.go:242] writing updated cluster config ...
	I0914 23:12:19.948760 2959146 ssh_runner.go:195] Run: rm -f paused
	I0914 23:12:20.081655 2959146 start.go:600] kubectl: 1.28.2, cluster: 1.28.1 (minor skew: 0)
	I0914 23:12:20.083552 2959146 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-448798" cluster and "default" namespace by default
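Both clusters finish with the node_conditions.go checks seen above: the pressure conditions are verified and node CPU and ephemeral-storage capacity are read back. A rough client-go sketch of that kind of verification (placeholder kubeconfig path; not the exact minikube code) is:

    // Sketch: list nodes, flag any pressure conditions, and print capacity.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            for _, c := range n.Status.Conditions {
                // MemoryPressure / DiskPressure / PIDPressure should be False on a healthy node.
                if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure) &&
                    c.Status == corev1.ConditionTrue {
                    fmt.Printf("node %s reports %s=True\n", n.Name, c.Type)
                }
            }
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("node %s: cpu capacity %s, ephemeral-storage %s\n", n.Name, cpu.String(), storage.String())
        }
    }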
	
	* 
	* ==> CRI-O <==
	* Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.241146077Z" level=info msg="Creating container: kube-system/kindnet-rw9vg/kindnet-cni" id=29d23272-a7ab-4496-80b6-d1134a448294 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.241199517Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.250554999Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f2d9bb4c822f595dd7234bc4c8e437e194b11674572011de73a6695af0479dcd/merged/etc/passwd: no such file or directory"
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.250755762Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f2d9bb4c822f595dd7234bc4c8e437e194b11674572011de73a6695af0479dcd/merged/etc/group: no such file or directory"
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.473400860Z" level=info msg="Created container b5e6210b023d044137b0469dda76f1850a51b23f74c32da9838e9a715edbc5fd: kube-system/kindnet-rw9vg/kindnet-cni" id=29d23272-a7ab-4496-80b6-d1134a448294 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.474206234Z" level=info msg="Starting container: b5e6210b023d044137b0469dda76f1850a51b23f74c32da9838e9a715edbc5fd" id=3dabfb32-bc8e-40e4-8b29-e52a75d503a1 name=/runtime.v1.RuntimeService/StartContainer
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.496349901Z" level=info msg="Started container" PID=3602 containerID=b5e6210b023d044137b0469dda76f1850a51b23f74c32da9838e9a715edbc5fd description=kube-system/kindnet-rw9vg/kindnet-cni id=3dabfb32-bc8e-40e4-8b29-e52a75d503a1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4efca5701c8659f9d6d0ed03cc5a55bcf0de0b2a7eef3ffb2e26abcd585b7bcd
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.553497249Z" level=info msg="Created container b6db342357f510417f8fda90f583c6b0202c92c112c3cbada8d873265aaaaf35: kube-system/coredns-5dd5756b68-fsjl2/coredns" id=2fd18101-61a7-4d4c-ab5e-4318c70926e1 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.554166409Z" level=info msg="Starting container: b6db342357f510417f8fda90f583c6b0202c92c112c3cbada8d873265aaaaf35" id=567e5f5e-346f-45d3-bcd2-f230dee12dd3 name=/runtime.v1.RuntimeService/StartContainer
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.570937255Z" level=info msg="Started container" PID=3600 containerID=b6db342357f510417f8fda90f583c6b0202c92c112c3cbada8d873265aaaaf35 description=kube-system/coredns-5dd5756b68-fsjl2/coredns id=567e5f5e-346f-45d3-bcd2-f230dee12dd3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2edc7f20ed3d342b17ce93090a09bad764b7bd98bd63c25e83bf030983c6f1f7
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.612923463Z" level=info msg="Created container 5c69b06d3e8f8b73e5ebbe350e854e7e8df30c6721f82b6f3fae84778c347c9c: kube-system/kube-proxy-lprwg/kube-proxy" id=cef7af05-14b9-4cb6-b45a-843c12c3cfe5 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.613440705Z" level=info msg="Starting container: 5c69b06d3e8f8b73e5ebbe350e854e7e8df30c6721f82b6f3fae84778c347c9c" id=0449c71c-b81b-4e6c-b478-121c64f6c8e0 name=/runtime.v1.RuntimeService/StartContainer
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.649250030Z" level=info msg="Started container" PID=3593 containerID=5c69b06d3e8f8b73e5ebbe350e854e7e8df30c6721f82b6f3fae84778c347c9c description=kube-system/kube-proxy-lprwg/kube-proxy id=0449c71c-b81b-4e6c-b478-121c64f6c8e0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=10108216db33f60880a26371aabcf6d385af82e24d78d1ca21091ccf1bba5b8b
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.905076492Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.933360021Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.933393440Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.933409399Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.992937914Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.992974624Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.992993685Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Sep 14 23:12:05 pause-188837 crio[2615]: time="2023-09-14 23:12:05.004911151Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 14 23:12:05 pause-188837 crio[2615]: time="2023-09-14 23:12:05.004955425Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 14 23:12:05 pause-188837 crio[2615]: time="2023-09-14 23:12:05.004972804Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 14 23:12:05 pause-188837 crio[2615]: time="2023-09-14 23:12:05.025493235Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 14 23:12:05 pause-188837 crio[2615]: time="2023-09-14 23:12:05.025531405Z" level=info msg="Updated default CNI network name to kindnet"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5c69b06d3e8f8       812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26   16 seconds ago       Running             kube-proxy                2                   10108216db33f       kube-proxy-lprwg
	b5e6210b023d0       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   16 seconds ago       Running             kindnet-cni               2                   4efca5701c865       kindnet-rw9vg
	b6db342357f51       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   16 seconds ago       Running             coredns                   2                   2edc7f20ed3d3       coredns-5dd5756b68-fsjl2
	b803eeba32c17       b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a   24 seconds ago       Running             kube-apiserver            3                   a5fbc85f79339       kube-apiserver-pause-188837
	524098e08c3aa       b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87   25 seconds ago       Running             kube-scheduler            3                   81f238f5cfc9e       kube-scheduler-pause-188837
	14a3a0c1b0196       8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965   25 seconds ago       Running             kube-controller-manager   3                   a0d11b5b50a01       kube-controller-manager-pause-188837
	d2cbf1c641eee       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   25 seconds ago       Running             etcd                      3                   08ee086b60e39       etcd-pause-188837
	ea7a90d540ab0       b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a   47 seconds ago       Exited              kube-apiserver            2                   a5fbc85f79339       kube-apiserver-pause-188837
	af88552a2fe0e       8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965   50 seconds ago       Exited              kube-controller-manager   2                   a0d11b5b50a01       kube-controller-manager-pause-188837
	cff2edb1f640f       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   50 seconds ago       Exited              etcd                      2                   08ee086b60e39       etcd-pause-188837
	7817de3dddb7f       b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87   54 seconds ago       Exited              kube-scheduler            2                   81f238f5cfc9e       kube-scheduler-pause-188837
	3096294cf9ef2       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   About a minute ago   Exited              coredns                   1                   2edc7f20ed3d3       coredns-5dd5756b68-fsjl2
	b148ea09494ca       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   About a minute ago   Exited              kindnet-cni               1                   4efca5701c865       kindnet-rw9vg
	1ee06b5330745       812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26   About a minute ago   Exited              kube-proxy                1                   10108216db33f       kube-proxy-lprwg
	
	* 
	* ==> coredns [3096294cf9ef2836640c6a63f9f3dfa4e709195bb30bc3eeb3edb0e785df0352] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:55835 - 33502 "HINFO IN 8412816801247314877.2926992871546622061. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013239459s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [b6db342357f510417f8fda90f583c6b0202c92c112c3cbada8d873265aaaaf35] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33469 - 33770 "HINFO IN 8451479171937443088.8887657869746167443. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024076293s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-188837
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-188837
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=pause-188837
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T23_10_43_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 23:10:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-188837
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 23:12:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 23:12:03 +0000   Thu, 14 Sep 2023 23:10:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 23:12:03 +0000   Thu, 14 Sep 2023 23:10:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 23:12:03 +0000   Thu, 14 Sep 2023 23:10:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 23:12:03 +0000   Thu, 14 Sep 2023 23:10:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-188837
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 0b67473cea5640c1a9697609a1d1def3
	  System UUID:                4ea1df65-a4c0-4fb2-b563-32e18331094b
	  Boot ID:                    370886c1-a939-4b15-8117-498126d3502e
	  Kernel Version:             5.15.0-1044-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-fsjl2                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     86s
	  kube-system                 etcd-pause-188837                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         99s
	  kube-system                 kindnet-rw9vg                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      86s
	  kube-system                 kube-apiserver-pause-188837             250m (12%)    0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-controller-manager-pause-188837    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-lprwg                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-scheduler-pause-188837             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 84s                  kube-proxy       
	  Normal  Starting                 16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  108s (x8 over 108s)  kubelet          Node pause-188837 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s (x8 over 108s)  kubelet          Node pause-188837 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s (x8 over 108s)  kubelet          Node pause-188837 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     99s                  kubelet          Node pause-188837 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  99s                  kubelet          Node pause-188837 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                  kubelet          Node pause-188837 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 99s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           87s                  node-controller  Node pause-188837 event: Registered Node pause-188837 in Controller
	  Normal  NodeReady                82s                  kubelet          Node pause-188837 status is now: NodeReady
	  Normal  Starting                 44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s (x8 over 43s)    kubelet          Node pause-188837 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x8 over 43s)    kubelet          Node pause-188837 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x8 over 43s)    kubelet          Node pause-188837 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6s                   node-controller  Node pause-188837 event: Registered Node pause-188837 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001074] FS-Cache: O-key=[8] '85703b0000000000'
	[  +0.000714] FS-Cache: N-cookie c=000000e5 [p=000000db fl=2 nc=0 na=1]
	[  +0.000899] FS-Cache: N-cookie d=00000000358e642b{9p.inode} n=0000000040a297ab
	[  +0.001017] FS-Cache: N-key=[8] '85703b0000000000'
	[  +2.012590] FS-Cache: Duplicate cookie detected
	[  +0.000690] FS-Cache: O-cookie c=000000dc [p=000000db fl=226 nc=0 na=1]
	[  +0.000956] FS-Cache: O-cookie d=00000000358e642b{9p.inode} n=0000000000e476c3
	[  +0.001056] FS-Cache: O-key=[8] '84703b0000000000'
	[  +0.000740] FS-Cache: N-cookie c=000000e7 [p=000000db fl=2 nc=0 na=1]
	[  +0.001048] FS-Cache: N-cookie d=00000000358e642b{9p.inode} n=00000000e4905bc3
	[  +0.001024] FS-Cache: N-key=[8] '84703b0000000000'
	[  +0.406786] FS-Cache: Duplicate cookie detected
	[  +0.000688] FS-Cache: O-cookie c=000000e1 [p=000000db fl=226 nc=0 na=1]
	[  +0.000951] FS-Cache: O-cookie d=00000000358e642b{9p.inode} n=000000007a274cdd
	[  +0.001021] FS-Cache: O-key=[8] '8a703b0000000000'
	[  +0.000726] FS-Cache: N-cookie c=000000e8 [p=000000db fl=2 nc=0 na=1]
	[  +0.000918] FS-Cache: N-cookie d=00000000358e642b{9p.inode} n=0000000038968ff8
	[  +0.001006] FS-Cache: N-key=[8] '8a703b0000000000'
	[  +4.128718] FS-Cache: Duplicate cookie detected
	[  +0.000680] FS-Cache: O-cookie c=000000ea [p=00000002 fl=222 nc=0 na=1]
	[  +0.001015] FS-Cache: O-cookie d=00000000fe6607cc{9P.session} n=000000001f02128f
	[  +0.001183] FS-Cache: O-key=[10] '34333134393838363731'
	[  +0.000776] FS-Cache: N-cookie c=000000eb [p=00000002 fl=2 nc=0 na=1]
	[  +0.000908] FS-Cache: N-cookie d=00000000fe6607cc{9P.session} n=00000000648dde5c
	[  +0.001093] FS-Cache: N-key=[10] '34333134393838363731'
	
	* 
	* ==> etcd [cff2edb1f640fe1f42767a20c1ea692f296328f86b24187ba5993d5026d95092] <==
	* {"level":"info","ts":"2023-09-14T23:11:30.533383Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-14T23:11:32.423508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2023-09-14T23:11:32.423553Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-09-14T23:11:32.423588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-09-14T23:11:32.423602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2023-09-14T23:11:32.423612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-09-14T23:11:32.423622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2023-09-14T23:11:32.423638Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-09-14T23:11:32.424715Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-188837 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T23:11:32.424729Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T23:11:32.424863Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T23:11:32.425817Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-09-14T23:11:32.425846Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T23:11:32.426053Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T23:11:32.42607Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T23:11:33.621686Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-14T23:11:33.621733Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-188837","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"warn","ts":"2023-09-14T23:11:33.621823Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-14T23:11:33.621848Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-14T23:11:33.623456Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-14T23:11:33.623528Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-14T23:11:33.623629Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2023-09-14T23:11:33.62603Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-09-14T23:11:33.626192Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-09-14T23:11:33.626228Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-188837","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> etcd [d2cbf1c641eee0609e11c54628c393fb943c1d87046116ab12815a085d6b78a2] <==
	* {"level":"info","ts":"2023-09-14T23:11:56.044774Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T23:11:56.046846Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T23:11:56.046905Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T23:11:56.045052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2023-09-14T23:11:56.047117Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-09-14T23:11:56.04725Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T23:11:56.047307Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T23:11:56.045134Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-09-14T23:11:56.055026Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-09-14T23:11:56.055704Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-14T23:11:56.055771Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-14T23:11:57.514532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 4"}
	{"level":"info","ts":"2023-09-14T23:11:57.514674Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 4"}
	{"level":"info","ts":"2023-09-14T23:11:57.514715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-09-14T23:11:57.514775Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 5"}
	{"level":"info","ts":"2023-09-14T23:11:57.514807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 5"}
	{"level":"info","ts":"2023-09-14T23:11:57.514847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 5"}
	{"level":"info","ts":"2023-09-14T23:11:57.51488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 5"}
	{"level":"info","ts":"2023-09-14T23:11:57.528725Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-188837 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T23:11:57.528911Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T23:11:57.529096Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T23:11:57.529126Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T23:11:57.529146Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T23:11:57.532626Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T23:11:57.538901Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	* 
	* ==> kernel <==
	*  23:12:21 up 22:54,  0 users,  load average: 3.18, 2.97, 2.21
	Linux pause-188837 5.15.0-1044-aws #49~20.04.1-Ubuntu SMP Mon Aug 21 17:10:24 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [b148ea09494cac754d9aba5297c8b8139adb0036057ac049874cec2c08d50c9b] <==
	* I0914 23:11:11.810897       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0914 23:11:11.810956       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0914 23:11:11.811152       1 main.go:116] setting mtu 1500 for CNI 
	I0914 23:11:11.811169       1 main.go:146] kindnetd IP family: "ipv4"
	I0914 23:11:11.811181       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	
	* 
	* ==> kindnet [b5e6210b023d044137b0469dda76f1850a51b23f74c32da9838e9a715edbc5fd] <==
	* I0914 23:12:04.574898       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0914 23:12:04.574968       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0914 23:12:04.575103       1 main.go:116] setting mtu 1500 for CNI 
	I0914 23:12:04.575113       1 main.go:146] kindnetd IP family: "ipv4"
	I0914 23:12:04.575123       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0914 23:12:04.904750       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0914 23:12:04.904784       1 main.go:227] handling current node
	I0914 23:12:14.989076       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0914 23:12:14.989110       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [b803eeba32c1724d92ade8d79224f2c1787cdd4b66a763100e569e000e33eab4] <==
	* I0914 23:12:02.836069       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0914 23:12:02.860770       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0914 23:12:02.864540       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0914 23:12:02.864951       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0914 23:12:02.872185       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0914 23:12:03.266814       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0914 23:12:03.272811       1 aggregator.go:166] initial CRD sync complete...
	I0914 23:12:03.272899       1 autoregister_controller.go:141] Starting autoregister controller
	I0914 23:12:03.272930       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0914 23:12:03.272966       1 cache.go:39] Caches are synced for autoregister controller
	I0914 23:12:03.279557       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 23:12:03.279751       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0914 23:12:03.324333       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 23:12:03.325597       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0914 23:12:03.330382       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0914 23:12:03.332798       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0914 23:12:03.335339       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0914 23:12:03.335741       1 shared_informer.go:318] Caches are synced for configmaps
	E0914 23:12:03.358347       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0914 23:12:03.832135       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0914 23:12:06.318375       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0914 23:12:06.541097       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0914 23:12:06.563296       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0914 23:12:06.659752       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 23:12:06.680128       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [ea7a90d540ab08aa09e91b154fed113e2850dad3a72d2325a74330e9fcd8a247] <==
	* W0914 23:11:49.008192       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 23:11:51.978476       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 23:11:53.070246       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0914 23:11:54.463871       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	* 
	* ==> kube-controller-manager [14a3a0c1b0196eb3e57d78ac2adf9207f6ac70707f42f6ce0c124a7cc2b4c586] <==
	* I0914 23:12:15.707325       1 shared_informer.go:318] Caches are synced for TTL
	I0914 23:12:15.712463       1 shared_informer.go:318] Caches are synced for namespace
	I0914 23:12:15.715510       1 shared_informer.go:318] Caches are synced for service account
	I0914 23:12:15.715543       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0914 23:12:15.715951       1 shared_informer.go:318] Caches are synced for daemon sets
	I0914 23:12:15.716012       1 shared_informer.go:318] Caches are synced for persistent volume
	I0914 23:12:15.715916       1 shared_informer.go:318] Caches are synced for GC
	I0914 23:12:15.715929       1 shared_informer.go:318] Caches are synced for expand
	I0914 23:12:15.716266       1 shared_informer.go:318] Caches are synced for deployment
	I0914 23:12:15.717173       1 shared_informer.go:318] Caches are synced for PV protection
	I0914 23:12:15.717208       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0914 23:12:15.717236       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0914 23:12:15.717258       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0914 23:12:15.717284       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0914 23:12:15.720258       1 shared_informer.go:318] Caches are synced for disruption
	I0914 23:12:15.786576       1 shared_informer.go:318] Caches are synced for crt configmap
	I0914 23:12:15.806033       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0914 23:12:15.842433       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0914 23:12:15.844940       1 shared_informer.go:318] Caches are synced for endpoint
	I0914 23:12:15.892485       1 shared_informer.go:318] Caches are synced for resource quota
	I0914 23:12:15.897437       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0914 23:12:15.909940       1 shared_informer.go:318] Caches are synced for resource quota
	I0914 23:12:16.216043       1 shared_informer.go:318] Caches are synced for garbage collector
	I0914 23:12:16.228381       1 shared_informer.go:318] Caches are synced for garbage collector
	I0914 23:12:16.228411       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [af88552a2fe0ec2def6d5fcbc7a8ed3820b2edab71922c453ed4b90c0742a4bd] <==
	* I0914 23:11:31.126705       1 serving.go:348] Generated self-signed cert in-memory
	I0914 23:11:31.802155       1 controllermanager.go:189] "Starting" version="v1.28.1"
	I0914 23:11:31.802185       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 23:11:31.803454       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0914 23:11:31.803540       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0914 23:11:31.804426       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0914 23:11:31.804488       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-proxy [1ee06b53307457c677083117c9f598f2e47155f0ef6cfb7f16d3da49bc0c7336] <==
	* 
	* 
	* ==> kube-proxy [5c69b06d3e8f8b73e5ebbe350e854e7e8df30c6721f82b6f3fae84778c347c9c] <==
	* I0914 23:12:04.761290       1 server_others.go:69] "Using iptables proxy"
	I0914 23:12:04.861584       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0914 23:12:05.028206       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0914 23:12:05.123081       1 server_others.go:152] "Using iptables Proxier"
	I0914 23:12:05.123121       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0914 23:12:05.123130       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0914 23:12:05.123250       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 23:12:05.123523       1 server.go:846] "Version info" version="v1.28.1"
	I0914 23:12:05.123538       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 23:12:05.132706       1 config.go:188] "Starting service config controller"
	I0914 23:12:05.132748       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 23:12:05.132773       1 config.go:97] "Starting endpoint slice config controller"
	I0914 23:12:05.132778       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 23:12:05.133349       1 config.go:315] "Starting node config controller"
	I0914 23:12:05.133356       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 23:12:05.243233       1 shared_informer.go:318] Caches are synced for node config
	I0914 23:12:05.243387       1 shared_informer.go:318] Caches are synced for service config
	I0914 23:12:05.243501       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [524098e08c3aa73c99aceb4637f1484c93251983bf50164e93c1f5a949f8099c] <==
	* I0914 23:12:02.170657       1 serving.go:348] Generated self-signed cert in-memory
	I0914 23:12:04.956086       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0914 23:12:04.956118       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 23:12:04.979944       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0914 23:12:04.980073       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0914 23:12:04.980103       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0914 23:12:04.980128       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0914 23:12:04.986502       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 23:12:04.986529       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 23:12:04.986549       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0914 23:12:04.986554       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0914 23:12:05.082843       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0914 23:12:05.091256       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0914 23:12:05.091316       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [7817de3dddb7f70040619830ee918074c182e90ca7e8c414285395b02003d648] <==
	* E0914 23:11:31.303111       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.76.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0914 23:11:31.428962       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.76.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0914 23:11:31.429006       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.76.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0914 23:11:31.440679       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0914 23:11:31.440767       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0914 23:11:31.743294       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0914 23:11:31.743345       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0914 23:11:31.756058       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0914 23:11:31.756180       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0914 23:11:31.946212       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0914 23:11:31.946251       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0914 23:11:32.038131       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0914 23:11:32.038172       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0914 23:11:32.113359       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0914 23:11:32.113402       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0914 23:11:32.146208       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0914 23:11:32.146254       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0914 23:11:32.192088       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0914 23:11:32.192141       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0914 23:11:32.302076       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.76.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0914 23:11:32.302123       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.76.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0914 23:11:33.783567       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0914 23:11:33.784203       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0914 23:11:33.784279       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0914 23:11:33.784381       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* Sep 14 23:11:55 pause-188837 kubelet[3313]: I0914 23:11:55.672584    3313 scope.go:117] "RemoveContainer" containerID="7817de3dddb7f70040619830ee918074c182e90ca7e8c414285395b02003d648"
	Sep 14 23:11:55 pause-188837 kubelet[3313]: I0914 23:11:55.674169    3313 kubelet_node_status.go:70] "Attempting to register node" node="pause-188837"
	Sep 14 23:11:55 pause-188837 kubelet[3313]: E0914 23:11:55.676550    3313 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="pause-188837"
	Sep 14 23:11:55 pause-188837 kubelet[3313]: E0914 23:11:55.873919    3313 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-188837?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="800ms"
	Sep 14 23:11:56 pause-188837 kubelet[3313]: I0914 23:11:56.056786    3313 scope.go:117] "RemoveContainer" containerID="ea7a90d540ab08aa09e91b154fed113e2850dad3a72d2325a74330e9fcd8a247"
	Sep 14 23:11:56 pause-188837 kubelet[3313]: E0914 23:11:56.154844    3313 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pause-188837.1784e6c601100c99", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"pause-188837", UID:"pause-188837", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"pause-188837"}, FirstTimestamp:time.Date(2023, time.September, 14, 23, 11, 37, 893891225, time.Local), LastTimestamp:time.Date(2023, time.September, 14, 23, 11, 37, 893891225, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"pause-188837"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 192.168.76.2:8443: connect: connection refused'(may retry after sleeping)
	Sep 14 23:11:57 pause-188837 kubelet[3313]: I0914 23:11:57.278027    3313 kubelet_node_status.go:70] "Attempting to register node" node="pause-188837"
	Sep 14 23:11:58 pause-188837 kubelet[3313]: E0914 23:11:58.158480    3313 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"pause-188837\" not found"
	Sep 14 23:12:03 pause-188837 kubelet[3313]: I0914 23:12:03.334391    3313 kubelet_node_status.go:108] "Node was previously registered" node="pause-188837"
	Sep 14 23:12:03 pause-188837 kubelet[3313]: I0914 23:12:03.334499    3313 kubelet_node_status.go:73] "Successfully registered node" node="pause-188837"
	Sep 14 23:12:03 pause-188837 kubelet[3313]: I0914 23:12:03.337099    3313 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 14 23:12:03 pause-188837 kubelet[3313]: I0914 23:12:03.337963    3313 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 14 23:12:03 pause-188837 kubelet[3313]: I0914 23:12:03.916146    3313 apiserver.go:52] "Watching apiserver"
	Sep 14 23:12:03 pause-188837 kubelet[3313]: I0914 23:12:03.924904    3313 topology_manager.go:215] "Topology Admit Handler" podUID="fe2fe062-01ec-4c26-b6d1-c181f2d685ea" podNamespace="kube-system" podName="kindnet-rw9vg"
	Sep 14 23:12:03 pause-188837 kubelet[3313]: I0914 23:12:03.925025    3313 topology_manager.go:215] "Topology Admit Handler" podUID="b888ea22-8d29-4c36-a973-02cd1262b1ae" podNamespace="kube-system" podName="kube-proxy-lprwg"
	Sep 14 23:12:03 pause-188837 kubelet[3313]: I0914 23:12:03.925084    3313 topology_manager.go:215] "Topology Admit Handler" podUID="67bad9d6-02e3-402b-b63e-83403a6c00c4" podNamespace="kube-system" podName="coredns-5dd5756b68-fsjl2"
	Sep 14 23:12:04 pause-188837 kubelet[3313]: I0914 23:12:04.014264    3313 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 14 23:12:04 pause-188837 kubelet[3313]: I0914 23:12:04.028694    3313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b888ea22-8d29-4c36-a973-02cd1262b1ae-xtables-lock\") pod \"kube-proxy-lprwg\" (UID: \"b888ea22-8d29-4c36-a973-02cd1262b1ae\") " pod="kube-system/kube-proxy-lprwg"
	Sep 14 23:12:04 pause-188837 kubelet[3313]: I0914 23:12:04.028751    3313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe2fe062-01ec-4c26-b6d1-c181f2d685ea-xtables-lock\") pod \"kindnet-rw9vg\" (UID: \"fe2fe062-01ec-4c26-b6d1-c181f2d685ea\") " pod="kube-system/kindnet-rw9vg"
	Sep 14 23:12:04 pause-188837 kubelet[3313]: I0914 23:12:04.028806    3313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b888ea22-8d29-4c36-a973-02cd1262b1ae-lib-modules\") pod \"kube-proxy-lprwg\" (UID: \"b888ea22-8d29-4c36-a973-02cd1262b1ae\") " pod="kube-system/kube-proxy-lprwg"
	Sep 14 23:12:04 pause-188837 kubelet[3313]: I0914 23:12:04.028854    3313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/fe2fe062-01ec-4c26-b6d1-c181f2d685ea-cni-cfg\") pod \"kindnet-rw9vg\" (UID: \"fe2fe062-01ec-4c26-b6d1-c181f2d685ea\") " pod="kube-system/kindnet-rw9vg"
	Sep 14 23:12:04 pause-188837 kubelet[3313]: I0914 23:12:04.028881    3313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe2fe062-01ec-4c26-b6d1-c181f2d685ea-lib-modules\") pod \"kindnet-rw9vg\" (UID: \"fe2fe062-01ec-4c26-b6d1-c181f2d685ea\") " pod="kube-system/kindnet-rw9vg"
	Sep 14 23:12:04 pause-188837 kubelet[3313]: I0914 23:12:04.226107    3313 scope.go:117] "RemoveContainer" containerID="3096294cf9ef2836640c6a63f9f3dfa4e709195bb30bc3eeb3edb0e785df0352"
	Sep 14 23:12:04 pause-188837 kubelet[3313]: I0914 23:12:04.227875    3313 scope.go:117] "RemoveContainer" containerID="1ee06b53307457c677083117c9f598f2e47155f0ef6cfb7f16d3da49bc0c7336"
	Sep 14 23:12:04 pause-188837 kubelet[3313]: I0914 23:12:04.228479    3313 scope.go:117] "RemoveContainer" containerID="b148ea09494cac754d9aba5297c8b8139adb0036057ac049874cec2c08d50c9b"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-188837 -n pause-188837
helpers_test.go:261: (dbg) Run:  kubectl --context pause-188837 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-188837
helpers_test.go:235: (dbg) docker inspect pause-188837:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "238fdd0a23ddbe7047bc942943bbc9d654ea46ac1d8ea5fe344869f54ecb3c54",
	        "Created": "2023-09-14T23:10:15.86430085Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2971242,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-14T23:10:16.234828749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:dc3fcbe613a9f8e1e2fcaa6abcc8f1cc38d54475810991578dbd56e1d327de1f",
	        "ResolvConfPath": "/var/lib/docker/containers/238fdd0a23ddbe7047bc942943bbc9d654ea46ac1d8ea5fe344869f54ecb3c54/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/238fdd0a23ddbe7047bc942943bbc9d654ea46ac1d8ea5fe344869f54ecb3c54/hostname",
	        "HostsPath": "/var/lib/docker/containers/238fdd0a23ddbe7047bc942943bbc9d654ea46ac1d8ea5fe344869f54ecb3c54/hosts",
	        "LogPath": "/var/lib/docker/containers/238fdd0a23ddbe7047bc942943bbc9d654ea46ac1d8ea5fe344869f54ecb3c54/238fdd0a23ddbe7047bc942943bbc9d654ea46ac1d8ea5fe344869f54ecb3c54-json.log",
	        "Name": "/pause-188837",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-188837:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-188837",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/eba455f22ac7b4e5d622158a95ba5cae31e4b21aa6ec6f8909253dbaf86a155b-init/diff:/var/lib/docker/overlay2/01d6f4b44b4d3652921d9dfec86a5600f173a3b2af60ce73c84e7669723804ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eba455f22ac7b4e5d622158a95ba5cae31e4b21aa6ec6f8909253dbaf86a155b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eba455f22ac7b4e5d622158a95ba5cae31e4b21aa6ec6f8909253dbaf86a155b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eba455f22ac7b4e5d622158a95ba5cae31e4b21aa6ec6f8909253dbaf86a155b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-188837",
	                "Source": "/var/lib/docker/volumes/pause-188837/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-188837",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-188837",
	                "name.minikube.sigs.k8s.io": "pause-188837",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2b38fbaa301d8e5c882d9ff023f8008a2135ca03425cf0c30950c6428d6b6116",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36579"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36578"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36575"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36577"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36576"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2b38fbaa301d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-188837": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "238fdd0a23dd",
	                        "pause-188837"
	                    ],
	                    "NetworkID": "22fc45c87a68c0c8994f05a99ada433a32bf4fab19f3b1153960f5158ea51118",
	                    "EndpointID": "fd54ae9dc21225805e65762df1ba27d63f93a4c7e19527a90c861d51035e502a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-188837 -n pause-188837
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-188837 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-188837 logs -n 25: (2.500567252s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| start   | -p insufficient-storage-727065 | insufficient-storage-727065 | jenkins | v1.31.2 | 14 Sep 23 23:04 UTC |                     |
	|         | --memory=2048 --output=json    |                             |         |         |                     |                     |
	|         | --wait=true --driver=docker    |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p insufficient-storage-727065 | insufficient-storage-727065 | jenkins | v1.31.2 | 14 Sep 23 23:04 UTC | 14 Sep 23 23:04 UTC |
	| start   | -p NoKubernetes-836473         | NoKubernetes-836473         | jenkins | v1.31.2 | 14 Sep 23 23:04 UTC |                     |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-836473         | NoKubernetes-836473         | jenkins | v1.31.2 | 14 Sep 23 23:04 UTC | 14 Sep 23 23:05 UTC |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-836473         | NoKubernetes-836473         | jenkins | v1.31.2 | 14 Sep 23 23:05 UTC | 14 Sep 23 23:06 UTC |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p NoKubernetes-836473         | NoKubernetes-836473         | jenkins | v1.31.2 | 14 Sep 23 23:06 UTC | 14 Sep 23 23:06 UTC |
	| start   | -p NoKubernetes-836473         | NoKubernetes-836473         | jenkins | v1.31.2 | 14 Sep 23 23:06 UTC | 14 Sep 23 23:06 UTC |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| ssh     | -p NoKubernetes-836473 sudo    | NoKubernetes-836473         | jenkins | v1.31.2 | 14 Sep 23 23:06 UTC |                     |
	|         | systemctl is-active --quiet    |                             |         |         |                     |                     |
	|         | service kubelet                |                             |         |         |                     |                     |
	| stop    | -p NoKubernetes-836473         | NoKubernetes-836473         | jenkins | v1.31.2 | 14 Sep 23 23:06 UTC | 14 Sep 23 23:06 UTC |
	| start   | -p NoKubernetes-836473         | NoKubernetes-836473         | jenkins | v1.31.2 | 14 Sep 23 23:06 UTC | 14 Sep 23 23:06 UTC |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| ssh     | -p NoKubernetes-836473 sudo    | NoKubernetes-836473         | jenkins | v1.31.2 | 14 Sep 23 23:06 UTC |                     |
	|         | systemctl is-active --quiet    |                             |         |         |                     |                     |
	|         | service kubelet                |                             |         |         |                     |                     |
	| delete  | -p NoKubernetes-836473         | NoKubernetes-836473         | jenkins | v1.31.2 | 14 Sep 23 23:06 UTC | 14 Sep 23 23:06 UTC |
	| start   | -p kubernetes-upgrade-448798   | kubernetes-upgrade-448798   | jenkins | v1.31.2 | 14 Sep 23 23:06 UTC | 14 Sep 23 23:07 UTC |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p missing-upgrade-595333      | missing-upgrade-595333      | jenkins | v1.31.2 | 14 Sep 23 23:06 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-448798   | kubernetes-upgrade-448798   | jenkins | v1.31.2 | 14 Sep 23 23:07 UTC | 14 Sep 23 23:07 UTC |
	| start   | -p kubernetes-upgrade-448798   | kubernetes-upgrade-448798   | jenkins | v1.31.2 | 14 Sep 23 23:07 UTC | 14 Sep 23 23:12 UTC |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1   |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p missing-upgrade-595333      | missing-upgrade-595333      | jenkins | v1.31.2 | 14 Sep 23 23:07 UTC | 14 Sep 23 23:07 UTC |
	| start   | -p stopped-upgrade-686061      | stopped-upgrade-686061      | jenkins | v1.31.2 | 14 Sep 23 23:08 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p stopped-upgrade-686061      | stopped-upgrade-686061      | jenkins | v1.31.2 | 14 Sep 23 23:09 UTC | 14 Sep 23 23:09 UTC |
	| start   | -p running-upgrade-629800      | running-upgrade-629800      | jenkins | v1.31.2 | 14 Sep 23 23:10 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p running-upgrade-629800      | running-upgrade-629800      | jenkins | v1.31.2 | 14 Sep 23 23:10 UTC | 14 Sep 23 23:10 UTC |
	| start   | -p pause-188837 --memory=2048  | pause-188837                | jenkins | v1.31.2 | 14 Sep 23 23:10 UTC | 14 Sep 23 23:11 UTC |
	|         | --install-addons=false         |                             |         |         |                     |                     |
	|         | --wait=all --driver=docker     |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p pause-188837                | pause-188837                | jenkins | v1.31.2 | 14 Sep 23 23:11 UTC | 14 Sep 23 23:12 UTC |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-448798   | kubernetes-upgrade-448798   | jenkins | v1.31.2 | 14 Sep 23 23:12 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-448798   | kubernetes-upgrade-448798   | jenkins | v1.31.2 | 14 Sep 23 23:12 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1   |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 23:12:20
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 23:12:20.440377 2979678 out.go:296] Setting OutFile to fd 1 ...
	I0914 23:12:20.441197 2979678 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 23:12:20.441208 2979678 out.go:309] Setting ErrFile to fd 2...
	I0914 23:12:20.441215 2979678 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 23:12:20.441665 2979678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-2840729/.minikube/bin
	I0914 23:12:20.442279 2979678 out.go:303] Setting JSON to false
	I0914 23:12:20.443956 2979678 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":82485,"bootTime":1694650655,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0914 23:12:20.444054 2979678 start.go:138] virtualization:  
	I0914 23:12:20.448251 2979678 out.go:177] * [kubernetes-upgrade-448798] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0914 23:12:20.450323 2979678 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 23:12:20.452041 2979678 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:12:20.460669 2979678 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 23:12:20.452840 2979678 notify.go:220] Checking for updates...
	I0914 23:12:20.465069 2979678 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube
	I0914 23:12:20.467038 2979678 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 23:12:20.468755 2979678 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:12:20.471169 2979678 config.go:182] Loaded profile config "kubernetes-upgrade-448798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 23:12:20.471810 2979678 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 23:12:20.503709 2979678 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0914 23:12:20.503799 2979678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 23:12:20.646545 2979678 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:55 SystemTime:2023-09-14 23:12:20.634352967 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 23:12:20.646669 2979678 docker.go:294] overlay module found
	I0914 23:12:20.648916 2979678 out.go:177] * Using the docker driver based on existing profile
	I0914 23:12:20.651240 2979678 start.go:298] selected driver: docker
	I0914 23:12:20.651261 2979678 start.go:902] validating driver "docker" against &{Name:kubernetes-upgrade-448798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kubernetes-upgrade-448798 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 23:12:20.651368 2979678 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:12:20.652094 2979678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 23:12:20.776484 2979678 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:55 SystemTime:2023-09-14 23:12:20.765823923 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 23:12:20.776843 2979678 cni.go:84] Creating CNI manager for ""
	I0914 23:12:20.776854 2979678 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 23:12:20.776865 2979678 start_flags.go:321] config:
	{Name:kubernetes-upgrade-448798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kubernetes-upgrade-448798 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 23:12:20.779515 2979678 out.go:177] * Starting control plane node kubernetes-upgrade-448798 in cluster kubernetes-upgrade-448798
	I0914 23:12:20.781551 2979678 cache.go:122] Beginning downloading kic base image for docker with crio
	I0914 23:12:20.783547 2979678 out.go:177] * Pulling base image ...
	I0914 23:12:20.785558 2979678 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 23:12:20.785609 2979678 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4
	I0914 23:12:20.785621 2979678 cache.go:57] Caching tarball of preloaded images
	I0914 23:12:20.785730 2979678 preload.go:174] Found /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0914 23:12:20.785740 2979678 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0914 23:12:20.785847 2979678 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/kubernetes-upgrade-448798/config.json ...
	I0914 23:12:20.786081 2979678 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 in local docker daemon
	I0914 23:12:20.807297 2979678 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 in local docker daemon, skipping pull
	I0914 23:12:20.807319 2979678 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 exists in daemon, skipping load
	I0914 23:12:20.807337 2979678 cache.go:195] Successfully downloaded all kic artifacts
	I0914 23:12:20.807371 2979678 start.go:365] acquiring machines lock for kubernetes-upgrade-448798: {Name:mkf1993d57fbfdbe615a60ad42f9466d81ee426b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 23:12:20.807433 2979678 start.go:369] acquired machines lock for "kubernetes-upgrade-448798" in 39.065µs
	I0914 23:12:20.807453 2979678 start.go:96] Skipping create...Using existing machine configuration
	I0914 23:12:20.807458 2979678 fix.go:54] fixHost starting: 
	I0914 23:12:20.807737 2979678 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-448798 --format={{.State.Status}}
	I0914 23:12:20.826727 2979678 fix.go:102] recreateIfNeeded on kubernetes-upgrade-448798: state=Running err=<nil>
	W0914 23:12:20.826764 2979678 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 23:12:20.829108 2979678 out.go:177] * Updating the running docker "kubernetes-upgrade-448798" container ...
	
	* 
	* ==> CRI-O <==
	* Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.241146077Z" level=info msg="Creating container: kube-system/kindnet-rw9vg/kindnet-cni" id=29d23272-a7ab-4496-80b6-d1134a448294 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.241199517Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.250554999Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f2d9bb4c822f595dd7234bc4c8e437e194b11674572011de73a6695af0479dcd/merged/etc/passwd: no such file or directory"
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.250755762Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f2d9bb4c822f595dd7234bc4c8e437e194b11674572011de73a6695af0479dcd/merged/etc/group: no such file or directory"
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.473400860Z" level=info msg="Created container b5e6210b023d044137b0469dda76f1850a51b23f74c32da9838e9a715edbc5fd: kube-system/kindnet-rw9vg/kindnet-cni" id=29d23272-a7ab-4496-80b6-d1134a448294 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.474206234Z" level=info msg="Starting container: b5e6210b023d044137b0469dda76f1850a51b23f74c32da9838e9a715edbc5fd" id=3dabfb32-bc8e-40e4-8b29-e52a75d503a1 name=/runtime.v1.RuntimeService/StartContainer
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.496349901Z" level=info msg="Started container" PID=3602 containerID=b5e6210b023d044137b0469dda76f1850a51b23f74c32da9838e9a715edbc5fd description=kube-system/kindnet-rw9vg/kindnet-cni id=3dabfb32-bc8e-40e4-8b29-e52a75d503a1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4efca5701c8659f9d6d0ed03cc5a55bcf0de0b2a7eef3ffb2e26abcd585b7bcd
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.553497249Z" level=info msg="Created container b6db342357f510417f8fda90f583c6b0202c92c112c3cbada8d873265aaaaf35: kube-system/coredns-5dd5756b68-fsjl2/coredns" id=2fd18101-61a7-4d4c-ab5e-4318c70926e1 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.554166409Z" level=info msg="Starting container: b6db342357f510417f8fda90f583c6b0202c92c112c3cbada8d873265aaaaf35" id=567e5f5e-346f-45d3-bcd2-f230dee12dd3 name=/runtime.v1.RuntimeService/StartContainer
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.570937255Z" level=info msg="Started container" PID=3600 containerID=b6db342357f510417f8fda90f583c6b0202c92c112c3cbada8d873265aaaaf35 description=kube-system/coredns-5dd5756b68-fsjl2/coredns id=567e5f5e-346f-45d3-bcd2-f230dee12dd3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2edc7f20ed3d342b17ce93090a09bad764b7bd98bd63c25e83bf030983c6f1f7
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.612923463Z" level=info msg="Created container 5c69b06d3e8f8b73e5ebbe350e854e7e8df30c6721f82b6f3fae84778c347c9c: kube-system/kube-proxy-lprwg/kube-proxy" id=cef7af05-14b9-4cb6-b45a-843c12c3cfe5 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.613440705Z" level=info msg="Starting container: 5c69b06d3e8f8b73e5ebbe350e854e7e8df30c6721f82b6f3fae84778c347c9c" id=0449c71c-b81b-4e6c-b478-121c64f6c8e0 name=/runtime.v1.RuntimeService/StartContainer
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.649250030Z" level=info msg="Started container" PID=3593 containerID=5c69b06d3e8f8b73e5ebbe350e854e7e8df30c6721f82b6f3fae84778c347c9c description=kube-system/kube-proxy-lprwg/kube-proxy id=0449c71c-b81b-4e6c-b478-121c64f6c8e0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=10108216db33f60880a26371aabcf6d385af82e24d78d1ca21091ccf1bba5b8b
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.905076492Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.933360021Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.933393440Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.933409399Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.992937914Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.992974624Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 14 23:12:04 pause-188837 crio[2615]: time="2023-09-14 23:12:04.992993685Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Sep 14 23:12:05 pause-188837 crio[2615]: time="2023-09-14 23:12:05.004911151Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 14 23:12:05 pause-188837 crio[2615]: time="2023-09-14 23:12:05.004955425Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 14 23:12:05 pause-188837 crio[2615]: time="2023-09-14 23:12:05.004972804Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 14 23:12:05 pause-188837 crio[2615]: time="2023-09-14 23:12:05.025493235Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 14 23:12:05 pause-188837 crio[2615]: time="2023-09-14 23:12:05.025531405Z" level=info msg="Updated default CNI network name to kindnet"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5c69b06d3e8f8       812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26   21 seconds ago       Running             kube-proxy                2                   10108216db33f       kube-proxy-lprwg
	b5e6210b023d0       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   21 seconds ago       Running             kindnet-cni               2                   4efca5701c865       kindnet-rw9vg
	b6db342357f51       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   21 seconds ago       Running             coredns                   2                   2edc7f20ed3d3       coredns-5dd5756b68-fsjl2
	b803eeba32c17       b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a   29 seconds ago       Running             kube-apiserver            3                   a5fbc85f79339       kube-apiserver-pause-188837
	524098e08c3aa       b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87   29 seconds ago       Running             kube-scheduler            3                   81f238f5cfc9e       kube-scheduler-pause-188837
	14a3a0c1b0196       8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965   29 seconds ago       Running             kube-controller-manager   3                   a0d11b5b50a01       kube-controller-manager-pause-188837
	d2cbf1c641eee       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   29 seconds ago       Running             etcd                      3                   08ee086b60e39       etcd-pause-188837
	ea7a90d540ab0       b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a   52 seconds ago       Exited              kube-apiserver            2                   a5fbc85f79339       kube-apiserver-pause-188837
	af88552a2fe0e       8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965   54 seconds ago       Exited              kube-controller-manager   2                   a0d11b5b50a01       kube-controller-manager-pause-188837
	cff2edb1f640f       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   55 seconds ago       Exited              etcd                      2                   08ee086b60e39       etcd-pause-188837
	7817de3dddb7f       b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87   59 seconds ago       Exited              kube-scheduler            2                   81f238f5cfc9e       kube-scheduler-pause-188837
	3096294cf9ef2       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   About a minute ago   Exited              coredns                   1                   2edc7f20ed3d3       coredns-5dd5756b68-fsjl2
	b148ea09494ca       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   About a minute ago   Exited              kindnet-cni               1                   4efca5701c865       kindnet-rw9vg
	1ee06b5330745       812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26   About a minute ago   Exited              kube-proxy                1                   10108216db33f       kube-proxy-lprwg
	
	* 
	* ==> coredns [3096294cf9ef2836640c6a63f9f3dfa4e709195bb30bc3eeb3edb0e785df0352] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:55835 - 33502 "HINFO IN 8412816801247314877.2926992871546622061. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013239459s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [b6db342357f510417f8fda90f583c6b0202c92c112c3cbada8d873265aaaaf35] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33469 - 33770 "HINFO IN 8451479171937443088.8887657869746167443. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024076293s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-188837
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-188837
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7492f2ae2d9b6e62b385ffcd97ebad62c645e82
	                    minikube.k8s.io/name=pause-188837
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T23_10_43_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 23:10:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-188837
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 23:12:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 23:12:03 +0000   Thu, 14 Sep 2023 23:10:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 23:12:03 +0000   Thu, 14 Sep 2023 23:10:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 23:12:03 +0000   Thu, 14 Sep 2023 23:10:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 23:12:03 +0000   Thu, 14 Sep 2023 23:10:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-188837
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 0b67473cea5640c1a9697609a1d1def3
	  System UUID:                4ea1df65-a4c0-4fb2-b563-32e18331094b
	  Boot ID:                    370886c1-a939-4b15-8117-498126d3502e
	  Kernel Version:             5.15.0-1044-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-fsjl2                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     90s
	  kube-system                 etcd-pause-188837                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         103s
	  kube-system                 kindnet-rw9vg                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      90s
	  kube-system                 kube-apiserver-pause-188837             250m (12%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-controller-manager-pause-188837    200m (10%)    0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-proxy-lprwg                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-pause-188837             100m (5%)     0 (0%)      0 (0%)           0 (0%)         103s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 88s                  kube-proxy       
	  Normal  Starting                 20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  112s (x8 over 112s)  kubelet          Node pause-188837 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s (x8 over 112s)  kubelet          Node pause-188837 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s (x8 over 112s)  kubelet          Node pause-188837 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     103s                 kubelet          Node pause-188837 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  103s                 kubelet          Node pause-188837 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s                 kubelet          Node pause-188837 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           91s                  node-controller  Node pause-188837 event: Registered Node pause-188837 in Controller
	  Normal  NodeReady                86s                  kubelet          Node pause-188837 status is now: NodeReady
	  Normal  Starting                 48s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  47s (x8 over 47s)    kubelet          Node pause-188837 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x8 over 47s)    kubelet          Node pause-188837 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x8 over 47s)    kubelet          Node pause-188837 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10s                  node-controller  Node pause-188837 event: Registered Node pause-188837 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001074] FS-Cache: O-key=[8] '85703b0000000000'
	[  +0.000714] FS-Cache: N-cookie c=000000e5 [p=000000db fl=2 nc=0 na=1]
	[  +0.000899] FS-Cache: N-cookie d=00000000358e642b{9p.inode} n=0000000040a297ab
	[  +0.001017] FS-Cache: N-key=[8] '85703b0000000000'
	[  +2.012590] FS-Cache: Duplicate cookie detected
	[  +0.000690] FS-Cache: O-cookie c=000000dc [p=000000db fl=226 nc=0 na=1]
	[  +0.000956] FS-Cache: O-cookie d=00000000358e642b{9p.inode} n=0000000000e476c3
	[  +0.001056] FS-Cache: O-key=[8] '84703b0000000000'
	[  +0.000740] FS-Cache: N-cookie c=000000e7 [p=000000db fl=2 nc=0 na=1]
	[  +0.001048] FS-Cache: N-cookie d=00000000358e642b{9p.inode} n=00000000e4905bc3
	[  +0.001024] FS-Cache: N-key=[8] '84703b0000000000'
	[  +0.406786] FS-Cache: Duplicate cookie detected
	[  +0.000688] FS-Cache: O-cookie c=000000e1 [p=000000db fl=226 nc=0 na=1]
	[  +0.000951] FS-Cache: O-cookie d=00000000358e642b{9p.inode} n=000000007a274cdd
	[  +0.001021] FS-Cache: O-key=[8] '8a703b0000000000'
	[  +0.000726] FS-Cache: N-cookie c=000000e8 [p=000000db fl=2 nc=0 na=1]
	[  +0.000918] FS-Cache: N-cookie d=00000000358e642b{9p.inode} n=0000000038968ff8
	[  +0.001006] FS-Cache: N-key=[8] '8a703b0000000000'
	[  +4.128718] FS-Cache: Duplicate cookie detected
	[  +0.000680] FS-Cache: O-cookie c=000000ea [p=00000002 fl=222 nc=0 na=1]
	[  +0.001015] FS-Cache: O-cookie d=00000000fe6607cc{9P.session} n=000000001f02128f
	[  +0.001183] FS-Cache: O-key=[10] '34333134393838363731'
	[  +0.000776] FS-Cache: N-cookie c=000000eb [p=00000002 fl=2 nc=0 na=1]
	[  +0.000908] FS-Cache: N-cookie d=00000000fe6607cc{9P.session} n=00000000648dde5c
	[  +0.001093] FS-Cache: N-key=[10] '34333134393838363731'
	
	* 
	* ==> etcd [cff2edb1f640fe1f42767a20c1ea692f296328f86b24187ba5993d5026d95092] <==
	* {"level":"info","ts":"2023-09-14T23:11:30.533383Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-14T23:11:32.423508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2023-09-14T23:11:32.423553Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-09-14T23:11:32.423588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-09-14T23:11:32.423602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2023-09-14T23:11:32.423612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-09-14T23:11:32.423622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2023-09-14T23:11:32.423638Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-09-14T23:11:32.424715Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-188837 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T23:11:32.424729Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T23:11:32.424863Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T23:11:32.425817Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-09-14T23:11:32.425846Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T23:11:32.426053Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T23:11:32.42607Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T23:11:33.621686Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-14T23:11:33.621733Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-188837","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"warn","ts":"2023-09-14T23:11:33.621823Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-14T23:11:33.621848Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-14T23:11:33.623456Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-14T23:11:33.623528Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-14T23:11:33.623629Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2023-09-14T23:11:33.62603Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-09-14T23:11:33.626192Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-09-14T23:11:33.626228Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-188837","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> etcd [d2cbf1c641eee0609e11c54628c393fb943c1d87046116ab12815a085d6b78a2] <==
	* {"level":"info","ts":"2023-09-14T23:11:56.044774Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T23:11:56.046846Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T23:11:56.046905Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T23:11:56.045052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2023-09-14T23:11:56.047117Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-09-14T23:11:56.04725Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T23:11:56.047307Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T23:11:56.045134Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-09-14T23:11:56.055026Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-09-14T23:11:56.055704Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-14T23:11:56.055771Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-14T23:11:57.514532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 4"}
	{"level":"info","ts":"2023-09-14T23:11:57.514674Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 4"}
	{"level":"info","ts":"2023-09-14T23:11:57.514715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-09-14T23:11:57.514775Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 5"}
	{"level":"info","ts":"2023-09-14T23:11:57.514807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 5"}
	{"level":"info","ts":"2023-09-14T23:11:57.514847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 5"}
	{"level":"info","ts":"2023-09-14T23:11:57.51488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 5"}
	{"level":"info","ts":"2023-09-14T23:11:57.528725Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-188837 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T23:11:57.528911Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T23:11:57.529096Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T23:11:57.529126Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T23:11:57.529146Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T23:11:57.532626Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T23:11:57.538901Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	* 
	* ==> kernel <==
	*  23:12:26 up 22:54,  0 users,  load average: 3.18, 2.97, 2.21
	Linux pause-188837 5.15.0-1044-aws #49~20.04.1-Ubuntu SMP Mon Aug 21 17:10:24 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [b148ea09494cac754d9aba5297c8b8139adb0036057ac049874cec2c08d50c9b] <==
	* I0914 23:11:11.810897       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0914 23:11:11.810956       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0914 23:11:11.811152       1 main.go:116] setting mtu 1500 for CNI 
	I0914 23:11:11.811169       1 main.go:146] kindnetd IP family: "ipv4"
	I0914 23:11:11.811181       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	
	* 
	* ==> kindnet [b5e6210b023d044137b0469dda76f1850a51b23f74c32da9838e9a715edbc5fd] <==
	* I0914 23:12:04.574898       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0914 23:12:04.574968       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0914 23:12:04.575103       1 main.go:116] setting mtu 1500 for CNI 
	I0914 23:12:04.575113       1 main.go:146] kindnetd IP family: "ipv4"
	I0914 23:12:04.575123       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0914 23:12:04.904750       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0914 23:12:04.904784       1 main.go:227] handling current node
	I0914 23:12:14.989076       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0914 23:12:14.989110       1 main.go:227] handling current node
	I0914 23:12:25.002475       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0914 23:12:25.005468       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [b803eeba32c1724d92ade8d79224f2c1787cdd4b66a763100e569e000e33eab4] <==
	* I0914 23:12:02.836069       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0914 23:12:02.860770       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0914 23:12:02.864540       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0914 23:12:02.864951       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0914 23:12:02.872185       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0914 23:12:03.266814       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0914 23:12:03.272811       1 aggregator.go:166] initial CRD sync complete...
	I0914 23:12:03.272899       1 autoregister_controller.go:141] Starting autoregister controller
	I0914 23:12:03.272930       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0914 23:12:03.272966       1 cache.go:39] Caches are synced for autoregister controller
	I0914 23:12:03.279557       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 23:12:03.279751       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0914 23:12:03.324333       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 23:12:03.325597       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0914 23:12:03.330382       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0914 23:12:03.332798       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0914 23:12:03.335339       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0914 23:12:03.335741       1 shared_informer.go:318] Caches are synced for configmaps
	E0914 23:12:03.358347       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0914 23:12:03.832135       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0914 23:12:06.318375       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0914 23:12:06.541097       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0914 23:12:06.563296       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0914 23:12:06.659752       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 23:12:06.680128       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [ea7a90d540ab08aa09e91b154fed113e2850dad3a72d2325a74330e9fcd8a247] <==
	* W0914 23:11:49.008192       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 23:11:51.978476       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 23:11:53.070246       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0914 23:11:54.463871       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	* 
	* ==> kube-controller-manager [14a3a0c1b0196eb3e57d78ac2adf9207f6ac70707f42f6ce0c124a7cc2b4c586] <==
	* I0914 23:12:15.707325       1 shared_informer.go:318] Caches are synced for TTL
	I0914 23:12:15.712463       1 shared_informer.go:318] Caches are synced for namespace
	I0914 23:12:15.715510       1 shared_informer.go:318] Caches are synced for service account
	I0914 23:12:15.715543       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0914 23:12:15.715951       1 shared_informer.go:318] Caches are synced for daemon sets
	I0914 23:12:15.716012       1 shared_informer.go:318] Caches are synced for persistent volume
	I0914 23:12:15.715916       1 shared_informer.go:318] Caches are synced for GC
	I0914 23:12:15.715929       1 shared_informer.go:318] Caches are synced for expand
	I0914 23:12:15.716266       1 shared_informer.go:318] Caches are synced for deployment
	I0914 23:12:15.717173       1 shared_informer.go:318] Caches are synced for PV protection
	I0914 23:12:15.717208       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0914 23:12:15.717236       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0914 23:12:15.717258       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0914 23:12:15.717284       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0914 23:12:15.720258       1 shared_informer.go:318] Caches are synced for disruption
	I0914 23:12:15.786576       1 shared_informer.go:318] Caches are synced for crt configmap
	I0914 23:12:15.806033       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0914 23:12:15.842433       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0914 23:12:15.844940       1 shared_informer.go:318] Caches are synced for endpoint
	I0914 23:12:15.892485       1 shared_informer.go:318] Caches are synced for resource quota
	I0914 23:12:15.897437       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0914 23:12:15.909940       1 shared_informer.go:318] Caches are synced for resource quota
	I0914 23:12:16.216043       1 shared_informer.go:318] Caches are synced for garbage collector
	I0914 23:12:16.228381       1 shared_informer.go:318] Caches are synced for garbage collector
	I0914 23:12:16.228411       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [af88552a2fe0ec2def6d5fcbc7a8ed3820b2edab71922c453ed4b90c0742a4bd] <==
	* I0914 23:11:31.126705       1 serving.go:348] Generated self-signed cert in-memory
	I0914 23:11:31.802155       1 controllermanager.go:189] "Starting" version="v1.28.1"
	I0914 23:11:31.802185       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 23:11:31.803454       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0914 23:11:31.803540       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0914 23:11:31.804426       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0914 23:11:31.804488       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-proxy [1ee06b53307457c677083117c9f598f2e47155f0ef6cfb7f16d3da49bc0c7336] <==
	* 
	* 
	* ==> kube-proxy [5c69b06d3e8f8b73e5ebbe350e854e7e8df30c6721f82b6f3fae84778c347c9c] <==
	* I0914 23:12:04.761290       1 server_others.go:69] "Using iptables proxy"
	I0914 23:12:04.861584       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0914 23:12:05.028206       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0914 23:12:05.123081       1 server_others.go:152] "Using iptables Proxier"
	I0914 23:12:05.123121       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0914 23:12:05.123130       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0914 23:12:05.123250       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 23:12:05.123523       1 server.go:846] "Version info" version="v1.28.1"
	I0914 23:12:05.123538       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 23:12:05.132706       1 config.go:188] "Starting service config controller"
	I0914 23:12:05.132748       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 23:12:05.132773       1 config.go:97] "Starting endpoint slice config controller"
	I0914 23:12:05.132778       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 23:12:05.133349       1 config.go:315] "Starting node config controller"
	I0914 23:12:05.133356       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 23:12:05.243233       1 shared_informer.go:318] Caches are synced for node config
	I0914 23:12:05.243387       1 shared_informer.go:318] Caches are synced for service config
	I0914 23:12:05.243501       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [524098e08c3aa73c99aceb4637f1484c93251983bf50164e93c1f5a949f8099c] <==
	* I0914 23:12:02.170657       1 serving.go:348] Generated self-signed cert in-memory
	I0914 23:12:04.956086       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0914 23:12:04.956118       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 23:12:04.979944       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0914 23:12:04.980073       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0914 23:12:04.980103       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0914 23:12:04.980128       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0914 23:12:04.986502       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 23:12:04.986529       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 23:12:04.986549       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0914 23:12:04.986554       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0914 23:12:05.082843       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0914 23:12:05.091256       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0914 23:12:05.091316       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [7817de3dddb7f70040619830ee918074c182e90ca7e8c414285395b02003d648] <==
	* E0914 23:11:31.303111       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.76.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0914 23:11:31.428962       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.76.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0914 23:11:31.429006       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.76.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0914 23:11:31.440679       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0914 23:11:31.440767       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0914 23:11:31.743294       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0914 23:11:31.743345       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0914 23:11:31.756058       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0914 23:11:31.756180       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0914 23:11:31.946212       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0914 23:11:31.946251       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0914 23:11:32.038131       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0914 23:11:32.038172       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0914 23:11:32.113359       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0914 23:11:32.113402       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0914 23:11:32.146208       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0914 23:11:32.146254       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0914 23:11:32.192088       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0914 23:11:32.192141       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0914 23:11:32.302076       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.76.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0914 23:11:32.302123       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.76.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0914 23:11:33.783567       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0914 23:11:33.784203       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0914 23:11:33.784279       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0914 23:11:33.784381       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* Sep 14 23:11:55 pause-188837 kubelet[3313]: I0914 23:11:55.672584    3313 scope.go:117] "RemoveContainer" containerID="7817de3dddb7f70040619830ee918074c182e90ca7e8c414285395b02003d648"
	Sep 14 23:11:55 pause-188837 kubelet[3313]: I0914 23:11:55.674169    3313 kubelet_node_status.go:70] "Attempting to register node" node="pause-188837"
	Sep 14 23:11:55 pause-188837 kubelet[3313]: E0914 23:11:55.676550    3313 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="pause-188837"
	Sep 14 23:11:55 pause-188837 kubelet[3313]: E0914 23:11:55.873919    3313 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-188837?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="800ms"
	Sep 14 23:11:56 pause-188837 kubelet[3313]: I0914 23:11:56.056786    3313 scope.go:117] "RemoveContainer" containerID="ea7a90d540ab08aa09e91b154fed113e2850dad3a72d2325a74330e9fcd8a247"
	Sep 14 23:11:56 pause-188837 kubelet[3313]: E0914 23:11:56.154844    3313 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pause-188837.1784e6c601100c99", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"pause-188837", UID:"pause-188837", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"pause-188837"}, FirstTimestamp:time.Date(2023, time.September, 14, 23, 11, 37, 893891225, time.Local), LastTimestamp:time.Da
te(2023, time.September, 14, 23, 11, 37, 893891225, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"pause-188837"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 192.168.76.2:8443: connect: connection refused'(may retry after sleeping)
	Sep 14 23:11:57 pause-188837 kubelet[3313]: I0914 23:11:57.278027    3313 kubelet_node_status.go:70] "Attempting to register node" node="pause-188837"
	Sep 14 23:11:58 pause-188837 kubelet[3313]: E0914 23:11:58.158480    3313 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"pause-188837\" not found"
	Sep 14 23:12:03 pause-188837 kubelet[3313]: I0914 23:12:03.334391    3313 kubelet_node_status.go:108] "Node was previously registered" node="pause-188837"
	Sep 14 23:12:03 pause-188837 kubelet[3313]: I0914 23:12:03.334499    3313 kubelet_node_status.go:73] "Successfully registered node" node="pause-188837"
	Sep 14 23:12:03 pause-188837 kubelet[3313]: I0914 23:12:03.337099    3313 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 14 23:12:03 pause-188837 kubelet[3313]: I0914 23:12:03.337963    3313 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 14 23:12:03 pause-188837 kubelet[3313]: I0914 23:12:03.916146    3313 apiserver.go:52] "Watching apiserver"
	Sep 14 23:12:03 pause-188837 kubelet[3313]: I0914 23:12:03.924904    3313 topology_manager.go:215] "Topology Admit Handler" podUID="fe2fe062-01ec-4c26-b6d1-c181f2d685ea" podNamespace="kube-system" podName="kindnet-rw9vg"
	Sep 14 23:12:03 pause-188837 kubelet[3313]: I0914 23:12:03.925025    3313 topology_manager.go:215] "Topology Admit Handler" podUID="b888ea22-8d29-4c36-a973-02cd1262b1ae" podNamespace="kube-system" podName="kube-proxy-lprwg"
	Sep 14 23:12:03 pause-188837 kubelet[3313]: I0914 23:12:03.925084    3313 topology_manager.go:215] "Topology Admit Handler" podUID="67bad9d6-02e3-402b-b63e-83403a6c00c4" podNamespace="kube-system" podName="coredns-5dd5756b68-fsjl2"
	Sep 14 23:12:04 pause-188837 kubelet[3313]: I0914 23:12:04.014264    3313 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 14 23:12:04 pause-188837 kubelet[3313]: I0914 23:12:04.028694    3313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b888ea22-8d29-4c36-a973-02cd1262b1ae-xtables-lock\") pod \"kube-proxy-lprwg\" (UID: \"b888ea22-8d29-4c36-a973-02cd1262b1ae\") " pod="kube-system/kube-proxy-lprwg"
	Sep 14 23:12:04 pause-188837 kubelet[3313]: I0914 23:12:04.028751    3313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe2fe062-01ec-4c26-b6d1-c181f2d685ea-xtables-lock\") pod \"kindnet-rw9vg\" (UID: \"fe2fe062-01ec-4c26-b6d1-c181f2d685ea\") " pod="kube-system/kindnet-rw9vg"
	Sep 14 23:12:04 pause-188837 kubelet[3313]: I0914 23:12:04.028806    3313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b888ea22-8d29-4c36-a973-02cd1262b1ae-lib-modules\") pod \"kube-proxy-lprwg\" (UID: \"b888ea22-8d29-4c36-a973-02cd1262b1ae\") " pod="kube-system/kube-proxy-lprwg"
	Sep 14 23:12:04 pause-188837 kubelet[3313]: I0914 23:12:04.028854    3313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/fe2fe062-01ec-4c26-b6d1-c181f2d685ea-cni-cfg\") pod \"kindnet-rw9vg\" (UID: \"fe2fe062-01ec-4c26-b6d1-c181f2d685ea\") " pod="kube-system/kindnet-rw9vg"
	Sep 14 23:12:04 pause-188837 kubelet[3313]: I0914 23:12:04.028881    3313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe2fe062-01ec-4c26-b6d1-c181f2d685ea-lib-modules\") pod \"kindnet-rw9vg\" (UID: \"fe2fe062-01ec-4c26-b6d1-c181f2d685ea\") " pod="kube-system/kindnet-rw9vg"
	Sep 14 23:12:04 pause-188837 kubelet[3313]: I0914 23:12:04.226107    3313 scope.go:117] "RemoveContainer" containerID="3096294cf9ef2836640c6a63f9f3dfa4e709195bb30bc3eeb3edb0e785df0352"
	Sep 14 23:12:04 pause-188837 kubelet[3313]: I0914 23:12:04.227875    3313 scope.go:117] "RemoveContainer" containerID="1ee06b53307457c677083117c9f598f2e47155f0ef6cfb7f16d3da49bc0c7336"
	Sep 14 23:12:04 pause-188837 kubelet[3313]: I0914 23:12:04.228479    3313 scope.go:117] "RemoveContainer" containerID="b148ea09494cac754d9aba5297c8b8139adb0036057ac049874cec2c08d50c9b"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-188837 -n pause-188837
helpers_test.go:261: (dbg) Run:  kubectl --context pause-188837 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (83.01s)
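For local triage, the post-mortem checks the harness runs above can be repeated by hand against the pause-188837 profile. This is a minimal sketch, assuming the profile from this run still exists and that out/minikube-linux-arm64 and kubectl are available; the commands mirror the helpers_test.go invocations shown above:

    # Check whether the profile's API server reports as Running
    out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-188837 -n pause-188837
    # List pods (all namespaces) that are not in the Running phase
    kubectl --context pause-188837 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
    # Regenerate the component logs that make up the post-mortem dump above
    out/minikube-linux-arm64 logs -p pause-188837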

                                                
                                    

Test pass (261/298)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 10.38
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.19
10 TestDownloadOnly/v1.28.1/json-events 10.43
11 TestDownloadOnly/v1.28.1/preload-exists 0
15 TestDownloadOnly/v1.28.1/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.23
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
19 TestBinaryMirror 0.58
22 TestAddons/Setup 166.59
24 TestAddons/parallel/Registry 16.28
26 TestAddons/parallel/InspektorGadget 10.83
27 TestAddons/parallel/MetricsServer 5.84
30 TestAddons/parallel/CSI 48.53
31 TestAddons/parallel/Headlamp 11.58
32 TestAddons/parallel/CloudSpanner 5.7
35 TestAddons/serial/GCPAuth/Namespaces 0.17
36 TestAddons/StoppedEnableDisable 12.4
37 TestCertOptions 34.9
38 TestCertExpiration 255.44
40 TestForceSystemdFlag 39
41 TestForceSystemdEnv 40.57
47 TestErrorSpam/setup 31.3
48 TestErrorSpam/start 0.86
49 TestErrorSpam/status 1.11
50 TestErrorSpam/pause 1.81
51 TestErrorSpam/unpause 1.86
52 TestErrorSpam/stop 1.48
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 53.05
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 42.87
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.09
63 TestFunctional/serial/CacheCmd/cache/add_remote 4.18
64 TestFunctional/serial/CacheCmd/cache/add_local 1.14
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
66 TestFunctional/serial/CacheCmd/cache/list 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
68 TestFunctional/serial/CacheCmd/cache/cache_reload 2.18
69 TestFunctional/serial/CacheCmd/cache/delete 0.13
70 TestFunctional/serial/MinikubeKubectlCmd 0.15
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
72 TestFunctional/serial/ExtraConfig 34.96
73 TestFunctional/serial/ComponentHealth 0.11
74 TestFunctional/serial/LogsCmd 1.77
75 TestFunctional/serial/LogsFileCmd 1.83
76 TestFunctional/serial/InvalidService 4.43
78 TestFunctional/parallel/ConfigCmd 0.46
79 TestFunctional/parallel/DashboardCmd 13.68
80 TestFunctional/parallel/DryRun 0.59
81 TestFunctional/parallel/InternationalLanguage 0.27
82 TestFunctional/parallel/StatusCmd 1.16
86 TestFunctional/parallel/ServiceCmdConnect 10.69
87 TestFunctional/parallel/AddonsCmd 0.2
88 TestFunctional/parallel/PersistentVolumeClaim 25.4
90 TestFunctional/parallel/SSHCmd 0.89
91 TestFunctional/parallel/CpCmd 1.5
93 TestFunctional/parallel/FileSync 0.41
94 TestFunctional/parallel/CertSync 2.46
98 TestFunctional/parallel/NodeLabels 0.1
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.79
102 TestFunctional/parallel/License 0.31
104 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.71
105 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
107 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.42
108 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
109 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
113 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
114 TestFunctional/parallel/ServiceCmd/DeployApp 6.23
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
116 TestFunctional/parallel/ProfileCmd/profile_list 0.41
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
118 TestFunctional/parallel/MountCmd/any-port 7.81
119 TestFunctional/parallel/ServiceCmd/List 0.56
120 TestFunctional/parallel/ServiceCmd/JSONOutput 0.66
121 TestFunctional/parallel/ServiceCmd/HTTPS 0.59
122 TestFunctional/parallel/ServiceCmd/Format 0.52
123 TestFunctional/parallel/ServiceCmd/URL 0.44
124 TestFunctional/parallel/MountCmd/specific-port 2.39
125 TestFunctional/parallel/MountCmd/VerifyCleanup 2.93
126 TestFunctional/parallel/Version/short 0.09
127 TestFunctional/parallel/Version/components 0.97
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
132 TestFunctional/parallel/ImageCommands/ImageBuild 2.93
133 TestFunctional/parallel/ImageCommands/Setup 1.89
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.26
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.38
136 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
137 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
138 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.27
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.92
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.27
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.97
144 TestFunctional/delete_addon-resizer_images 0.09
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.02
150 TestIngressAddonLegacy/StartLegacyK8sCluster 100.45
153 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.64
157 TestJSONOutput/start/Command 51.14
158 TestJSONOutput/start/Audit 0
160 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/pause/Command 0.79
164 TestJSONOutput/pause/Audit 0
166 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/unpause/Command 0.72
170 TestJSONOutput/unpause/Audit 0
172 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/stop/Command 5.88
176 TestJSONOutput/stop/Audit 0
178 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
180 TestErrorJSONOutput 0.23
182 TestKicCustomNetwork/create_custom_network 44.11
183 TestKicCustomNetwork/use_default_bridge_network 37.26
184 TestKicExistingNetwork 35.33
185 TestKicCustomSubnet 33.88
186 TestKicStaticIP 34.66
187 TestMainNoArgs 0.05
188 TestMinikubeProfile 68.25
191 TestMountStart/serial/StartWithMountFirst 10.37
192 TestMountStart/serial/VerifyMountFirst 0.29
193 TestMountStart/serial/StartWithMountSecond 7.03
194 TestMountStart/serial/VerifyMountSecond 0.3
195 TestMountStart/serial/DeleteFirst 1.66
196 TestMountStart/serial/VerifyMountPostDelete 0.27
197 TestMountStart/serial/Stop 1.22
198 TestMountStart/serial/RestartStopped 8.25
199 TestMountStart/serial/VerifyMountPostStop 0.29
202 TestMultiNode/serial/FreshStart2Nodes 70.61
203 TestMultiNode/serial/DeployApp2Nodes 5.68
205 TestMultiNode/serial/AddNode 20.97
206 TestMultiNode/serial/ProfileList 0.35
207 TestMultiNode/serial/CopyFile 10.86
208 TestMultiNode/serial/StopNode 2.35
209 TestMultiNode/serial/StartAfterStop 13.11
210 TestMultiNode/serial/RestartKeepsNodes 124.75
211 TestMultiNode/serial/DeleteNode 5.07
212 TestMultiNode/serial/StopMultiNode 24.05
213 TestMultiNode/serial/RestartMultiNode 82.71
214 TestMultiNode/serial/ValidateNameConflict 33.88
219 TestPreload 181.16
221 TestScheduledStopUnix 110.91
224 TestInsufficientStorage 13.04
227 TestKubernetesUpgrade 388.84
230 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
231 TestNoKubernetes/serial/StartWithK8s 43.1
232 TestNoKubernetes/serial/StartWithStopK8s 31.91
233 TestNoKubernetes/serial/Start 10.03
234 TestNoKubernetes/serial/VerifyK8sNotRunning 0.4
235 TestNoKubernetes/serial/ProfileList 0.96
236 TestNoKubernetes/serial/Stop 1.26
237 TestNoKubernetes/serial/StartNoArgs 7.45
238 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
239 TestStoppedBinaryUpgrade/Setup 1.04
241 TestStoppedBinaryUpgrade/MinikubeLogs 0.66
250 TestPause/serial/Start 55.04
259 TestNetworkPlugins/group/false 5.39
264 TestStartStop/group/old-k8s-version/serial/FirstStart 135.32
265 TestStartStop/group/old-k8s-version/serial/DeployApp 10.57
266 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.66
267 TestStartStop/group/old-k8s-version/serial/Stop 12.1
268 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
269 TestStartStop/group/old-k8s-version/serial/SecondStart 442.99
271 TestStartStop/group/no-preload/serial/FirstStart 71.48
272 TestStartStop/group/no-preload/serial/DeployApp 10.46
273 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.12
274 TestStartStop/group/no-preload/serial/Stop 12.08
275 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
276 TestStartStop/group/no-preload/serial/SecondStart 349.41
277 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.05
278 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
279 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.41
280 TestStartStop/group/old-k8s-version/serial/Pause 3.69
282 TestStartStop/group/embed-certs/serial/FirstStart 55.48
283 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 19.03
284 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
285 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.45
286 TestStartStop/group/no-preload/serial/Pause 4.24
288 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 56.42
289 TestStartStop/group/embed-certs/serial/DeployApp 8.69
290 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.69
291 TestStartStop/group/embed-certs/serial/Stop 12.36
292 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
293 TestStartStop/group/embed-certs/serial/SecondStart 622.69
294 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.65
295 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.73
296 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.36
297 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
298 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 629.38
299 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
300 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
301 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.34
302 TestStartStop/group/embed-certs/serial/Pause 3.77
304 TestStartStop/group/newest-cni/serial/FirstStart 46.93
305 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.03
306 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
307 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.42
308 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.88
309 TestStartStop/group/newest-cni/serial/DeployApp 0
310 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.01
311 TestNetworkPlugins/group/auto/Start 59.86
312 TestStartStop/group/newest-cni/serial/Stop 1.37
313 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
314 TestStartStop/group/newest-cni/serial/SecondStart 36.66
315 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
316 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
317 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.49
318 TestStartStop/group/newest-cni/serial/Pause 3.6
319 TestNetworkPlugins/group/kindnet/Start 55.24
320 TestNetworkPlugins/group/auto/KubeletFlags 0.39
321 TestNetworkPlugins/group/auto/NetCatPod 13.4
322 TestNetworkPlugins/group/auto/DNS 0.32
323 TestNetworkPlugins/group/auto/Localhost 0.31
324 TestNetworkPlugins/group/auto/HairPin 0.29
325 TestNetworkPlugins/group/calico/Start 74.66
326 TestNetworkPlugins/group/kindnet/ControllerPod 5.05
327 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
328 TestNetworkPlugins/group/kindnet/NetCatPod 12.38
329 TestNetworkPlugins/group/kindnet/DNS 0.32
330 TestNetworkPlugins/group/kindnet/Localhost 0.21
331 TestNetworkPlugins/group/kindnet/HairPin 0.23
332 TestNetworkPlugins/group/custom-flannel/Start 73.16
333 TestNetworkPlugins/group/calico/ControllerPod 5.07
334 TestNetworkPlugins/group/calico/KubeletFlags 0.52
335 TestNetworkPlugins/group/calico/NetCatPod 12.55
336 TestNetworkPlugins/group/calico/DNS 0.21
337 TestNetworkPlugins/group/calico/Localhost 0.25
338 TestNetworkPlugins/group/calico/HairPin 0.26
339 TestNetworkPlugins/group/enable-default-cni/Start 53.89
340 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
341 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.48
342 TestNetworkPlugins/group/custom-flannel/DNS 0.29
343 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
344 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
345 TestNetworkPlugins/group/flannel/Start 69.53
346 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.44
347 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.51
348 TestNetworkPlugins/group/enable-default-cni/DNS 0.35
349 TestNetworkPlugins/group/enable-default-cni/Localhost 0.28
350 TestNetworkPlugins/group/enable-default-cni/HairPin 0.28
351 TestNetworkPlugins/group/bridge/Start 87.65
352 TestNetworkPlugins/group/flannel/ControllerPod 5.05
353 TestNetworkPlugins/group/flannel/KubeletFlags 0.47
354 TestNetworkPlugins/group/flannel/NetCatPod 12.47
355 TestNetworkPlugins/group/flannel/DNS 0.24
356 TestNetworkPlugins/group/flannel/Localhost 0.19
357 TestNetworkPlugins/group/flannel/HairPin 0.18
358 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
359 TestNetworkPlugins/group/bridge/NetCatPod 10.36
360 TestNetworkPlugins/group/bridge/DNS 0.21
361 TestNetworkPlugins/group/bridge/Localhost 0.19
362 TestNetworkPlugins/group/bridge/HairPin 0.18
x
+
TestDownloadOnly/v1.16.0/json-events (10.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-170237 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-170237 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.380280462s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (10.38s)
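The start invocation above only downloads artifacts (kic base image, preload tarball, kubectl) and does not create a cluster. As a rough reproduction sketch, assuming the same flags as the test and that MINIKUBE_HOME points at this run's .minikube directory (see the Last Start log further down), the downloaded preload can be inspected afterwards:

    # Same download-only flow the test drives (no cluster is started)
    out/minikube-linux-arm64 start -o=json --download-only -p download-only-170237 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker
    # The v1.16.0 cri-o preload tarball should now be cached here
    ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/"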

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-170237
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-170237: exit status 85 (184.762089ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-170237 | jenkins | v1.31.2 | 14 Sep 23 22:26 UTC |          |
	|         | -p download-only-170237        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 22:26:40
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 22:26:40.500327 2846114 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:26:40.500466 2846114 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:26:40.500476 2846114 out.go:309] Setting ErrFile to fd 2...
	I0914 22:26:40.500482 2846114 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:26:40.500822 2846114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-2840729/.minikube/bin
	W0914 22:26:40.500961 2846114 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17243-2840729/.minikube/config/config.json: open /home/jenkins/minikube-integration/17243-2840729/.minikube/config/config.json: no such file or directory
	I0914 22:26:40.501358 2846114 out.go:303] Setting JSON to true
	I0914 22:26:40.502431 2846114 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":79745,"bootTime":1694650655,"procs":380,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0914 22:26:40.502496 2846114 start.go:138] virtualization:  
	I0914 22:26:40.505932 2846114 out.go:97] [download-only-170237] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0914 22:26:40.508125 2846114 out.go:169] MINIKUBE_LOCATION=17243
	W0914 22:26:40.506145 2846114 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball: no such file or directory
	I0914 22:26:40.506218 2846114 notify.go:220] Checking for updates...
	I0914 22:26:40.510386 2846114 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 22:26:40.512163 2846114 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 22:26:40.514456 2846114 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube
	I0914 22:26:40.516830 2846114 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0914 22:26:40.520323 2846114 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 22:26:40.520614 2846114 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 22:26:40.544372 2846114 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0914 22:26:40.544458 2846114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 22:26:40.617851 2846114 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-09-14 22:26:40.608112312 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 22:26:40.617958 2846114 docker.go:294] overlay module found
	I0914 22:26:40.619916 2846114 out.go:97] Using the docker driver based on user configuration
	I0914 22:26:40.619964 2846114 start.go:298] selected driver: docker
	I0914 22:26:40.619975 2846114 start.go:902] validating driver "docker" against <nil>
	I0914 22:26:40.620088 2846114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 22:26:40.686568 2846114 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-09-14 22:26:40.677025322 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 22:26:40.686728 2846114 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 22:26:40.686986 2846114 start_flags.go:384] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0914 22:26:40.687151 2846114 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 22:26:40.689554 2846114 out.go:169] Using Docker driver with root privileges
	I0914 22:26:40.691398 2846114 cni.go:84] Creating CNI manager for ""
	I0914 22:26:40.691414 2846114 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 22:26:40.691427 2846114 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 22:26:40.691448 2846114 start_flags.go:321] config:
	{Name:download-only-170237 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-170237 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:26:40.693773 2846114 out.go:97] Starting control plane node download-only-170237 in cluster download-only-170237
	I0914 22:26:40.693792 2846114 cache.go:122] Beginning downloading kic base image for docker with crio
	I0914 22:26:40.695737 2846114 out.go:97] Pulling base image ...
	I0914 22:26:40.695759 2846114 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0914 22:26:40.695927 2846114 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 in local docker daemon
	I0914 22:26:40.713070 2846114 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 to local cache
	I0914 22:26:40.713238 2846114 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 in local cache directory
	I0914 22:26:40.713332 2846114 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 to local cache
	I0914 22:26:40.767599 2846114 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0914 22:26:40.767625 2846114 cache.go:57] Caching tarball of preloaded images
	I0914 22:26:40.768245 2846114 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0914 22:26:40.770492 2846114 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0914 22:26:40.770511 2846114 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0914 22:26:40.897691 2846114 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0914 22:26:46.148892 2846114 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 as a tarball
	I0914 22:26:49.221641 2846114 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0914 22:26:49.221774 2846114 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0914 22:26:50.237361 2846114 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0914 22:26:50.237766 2846114 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/download-only-170237/config.json ...
	I0914 22:26:50.237800 2846114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/download-only-170237/config.json: {Name:mkc680f3a45f65427bf3622555deb09ae5638de5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 22:26:50.238443 2846114 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0914 22:26:50.238639 2846114 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl.sha1 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/linux/arm64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-170237"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.19s)

TestDownloadOnly/v1.28.1/json-events (10.43s)

=== RUN   TestDownloadOnly/v1.28.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-170237 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-170237 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.430323312s)
--- PASS: TestDownloadOnly/v1.28.1/json-events (10.43s)

TestDownloadOnly/v1.28.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.1/preload-exists (0.00s)

TestDownloadOnly/v1.28.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-170237
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-170237: exit status 85 (73.975432ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-170237 | jenkins | v1.31.2 | 14 Sep 23 22:26 UTC |          |
	|         | -p download-only-170237        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-170237 | jenkins | v1.31.2 | 14 Sep 23 22:26 UTC |          |
	|         | -p download-only-170237        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 22:26:51
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 22:26:51.074178 2846189 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:26:51.074392 2846189 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:26:51.074404 2846189 out.go:309] Setting ErrFile to fd 2...
	I0914 22:26:51.074411 2846189 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:26:51.074800 2846189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-2840729/.minikube/bin
	W0914 22:26:51.075018 2846189 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17243-2840729/.minikube/config/config.json: open /home/jenkins/minikube-integration/17243-2840729/.minikube/config/config.json: no such file or directory
	I0914 22:26:51.075354 2846189 out.go:303] Setting JSON to true
	I0914 22:26:51.076562 2846189 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":79756,"bootTime":1694650655,"procs":378,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0914 22:26:51.076659 2846189 start.go:138] virtualization:  
	I0914 22:26:51.088400 2846189 out.go:97] [download-only-170237] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0914 22:26:51.109489 2846189 out.go:169] MINIKUBE_LOCATION=17243
	I0914 22:26:51.088820 2846189 notify.go:220] Checking for updates...
	I0914 22:26:51.134897 2846189 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 22:26:51.153535 2846189 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 22:26:51.168700 2846189 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube
	I0914 22:26:51.186953 2846189 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0914 22:26:51.212619 2846189 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 22:26:51.213167 2846189 config.go:182] Loaded profile config "download-only-170237": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0914 22:26:51.213216 2846189 start.go:810] api.Load failed for download-only-170237: filestore "download-only-170237": Docker machine "download-only-170237" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0914 22:26:51.213329 2846189 driver.go:373] Setting default libvirt URI to qemu:///system
	W0914 22:26:51.213359 2846189 start.go:810] api.Load failed for download-only-170237: filestore "download-only-170237": Docker machine "download-only-170237" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0914 22:26:51.239169 2846189 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0914 22:26:51.239247 2846189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 22:26:51.310183 2846189 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-09-14 22:26:51.299864983 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 22:26:51.310290 2846189 docker.go:294] overlay module found
	I0914 22:26:51.348742 2846189 out.go:97] Using the docker driver based on existing profile
	I0914 22:26:51.348820 2846189 start.go:298] selected driver: docker
	I0914 22:26:51.348828 2846189 start.go:902] validating driver "docker" against &{Name:download-only-170237 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-170237 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:26:51.349020 2846189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 22:26:51.418193 2846189 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-09-14 22:26:51.40817274 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 22:26:51.418632 2846189 cni.go:84] Creating CNI manager for ""
	I0914 22:26:51.418648 2846189 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 22:26:51.418659 2846189 start_flags.go:321] config:
	{Name:download-only-170237 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:download-only-170237 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:26:51.444308 2846189 out.go:97] Starting control plane node download-only-170237 in cluster download-only-170237
	I0914 22:26:51.444356 2846189 cache.go:122] Beginning downloading kic base image for docker with crio
	I0914 22:26:51.477158 2846189 out.go:97] Pulling base image ...
	I0914 22:26:51.477206 2846189 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:26:51.477406 2846189 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 in local docker daemon
	I0914 22:26:51.494926 2846189 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 to local cache
	I0914 22:26:51.495102 2846189 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 in local cache directory
	I0914 22:26:51.495125 2846189 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 in local cache directory, skipping pull
	I0914 22:26:51.495133 2846189 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 exists in cache, skipping pull
	I0914 22:26:51.495141 2846189 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 as a tarball
	I0914 22:26:51.539069 2846189 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4
	I0914 22:26:51.539091 2846189 cache.go:57] Caching tarball of preloaded images
	I0914 22:26:51.539254 2846189 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:26:51.572912 2846189 out.go:97] Downloading Kubernetes v1.28.1 preload ...
	I0914 22:26:51.572941 2846189 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4 ...
	I0914 22:26:51.689746 2846189 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:44f3d096b9be2c2ed42e6b0d364bc859 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4
	I0914 22:26:59.942648 2846189 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4 ...
	I0914 22:26:59.942760 2846189 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4 ...
	I0914 22:27:00.862224 2846189 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0914 22:27:00.862353 2846189 profile.go:148] Saving config to /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/download-only-170237/config.json ...
	I0914 22:27:00.862566 2846189 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0914 22:27:00.862768 2846189 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/17243-2840729/.minikube/cache/linux/arm64/v1.28.1/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-170237"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.1/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-170237
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-374730 --alsologtostderr --binary-mirror http://127.0.0.1:35183 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-374730" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-374730
--- PASS: TestBinaryMirror (0.58s)

TestAddons/Setup (166.59s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-arm64 start -p addons-909789 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-linux-arm64 start -p addons-909789 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m46.587765349s)
--- PASS: TestAddons/Setup (166.59s)

TestAddons/parallel/Registry (16.28s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 59.922224ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-h7plr" [1b440547-fa9f-4c34-b301-34c86b1393ca] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.019236025s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-cwb2j" [ac0e755d-982d-4449-825f-57a34c959a00] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.014736685s
addons_test.go:316: (dbg) Run:  kubectl --context addons-909789 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-909789 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-909789 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.006315709s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-arm64 -p addons-909789 ip
2023/09/14 22:30:05 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p addons-909789 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.28s)

TestAddons/parallel/InspektorGadget (10.83s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-n2qhf" [3d471135-04a6-46ed-b446-f65b24af5736] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.017553638s
addons_test.go:817: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-909789
addons_test.go:817: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-909789: (5.812076517s)
--- PASS: TestAddons/parallel/InspektorGadget (10.83s)

TestAddons/parallel/MetricsServer (5.84s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 7.945092ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-dbcdr" [317fd3cf-10c3-4c25-a011-9d2e417c4901] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.015904405s
addons_test.go:391: (dbg) Run:  kubectl --context addons-909789 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p addons-909789 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.84s)

TestAddons/parallel/CSI (48.53s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 10.293585ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-909789 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-909789 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-909789 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-909789 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-909789 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-909789 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-909789 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-909789 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-909789 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-909789 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-909789 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-909789 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-909789 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-909789 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [bf733498-a748-4b90-b237-83f340e1f94f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [bf733498-a748-4b90-b237-83f340e1f94f] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.012085464s
addons_test.go:560: (dbg) Run:  kubectl --context addons-909789 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-909789 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-909789 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-909789 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-909789 delete pod task-pv-pod: (1.196617875s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-909789 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-909789 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-909789 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-909789 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-909789 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-909789 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-909789 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-909789 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-909789 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-909789 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-909789 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [ea2c0e93-de75-45e7-9097-a6031e6f6ece] Pending
helpers_test.go:344: "task-pv-pod-restore" [ea2c0e93-de75-45e7-9097-a6031e6f6ece] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [ea2c0e93-de75-45e7-9097-a6031e6f6ece] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.01920372s
addons_test.go:602: (dbg) Run:  kubectl --context addons-909789 delete pod task-pv-pod-restore
addons_test.go:602: (dbg) Done: kubectl --context addons-909789 delete pod task-pv-pod-restore: (1.081570597s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-909789 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-909789 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-arm64 -p addons-909789 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-arm64 -p addons-909789 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.785278433s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-arm64 -p addons-909789 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (48.53s)

TestAddons/parallel/Headlamp (11.58s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-909789 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-909789 --alsologtostderr -v=1: (1.537982525s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-699c48fb74-c52mw" [a31a3196-ee4b-4209-a834-d077a978b7ec] Pending
helpers_test.go:344: "headlamp-699c48fb74-c52mw" [a31a3196-ee4b-4209-a834-d077a978b7ec] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-c52mw" [a31a3196-ee4b-4209-a834-d077a978b7ec] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.042686164s
--- PASS: TestAddons/parallel/Headlamp (11.58s)

TestAddons/parallel/CloudSpanner (5.7s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-mkfsv" [42aacd55-de75-4f46-9889-24c49e636f57] Running / Ready:ContainersNotReady (containers with unready status: [cloud-spanner-emulator]) / ContainersReady:ContainersNotReady (containers with unready status: [cloud-spanner-emulator])
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.011725214s
addons_test.go:836: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-909789
--- PASS: TestAddons/parallel/CloudSpanner (5.70s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-909789 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-909789 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/StoppedEnableDisable (12.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-909789
addons_test.go:148: (dbg) Done: out/minikube-linux-arm64 stop -p addons-909789: (12.11525554s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-909789
addons_test.go:156: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-909789
addons_test.go:161: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-909789
--- PASS: TestAddons/StoppedEnableDisable (12.40s)

TestCertOptions (34.9s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-061358 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-061358 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (32.155851339s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-061358 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-061358 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-061358 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-061358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-061358
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-061358: (2.039138212s)
--- PASS: TestCertOptions (34.90s)

TestCertExpiration (255.44s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-266662 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-266662 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (39.909379062s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-266662 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-266662 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (32.878395213s)
helpers_test.go:175: Cleaning up "cert-expiration-266662" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-266662
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-266662: (2.651469214s)
--- PASS: TestCertExpiration (255.44s)

TestForceSystemdFlag (39s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-623081 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-623081 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.987412115s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-623081 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-623081" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-623081
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-623081: (2.620240182s)
--- PASS: TestForceSystemdFlag (39.00s)

TestForceSystemdEnv (40.57s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-948800 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-948800 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.843779302s)
helpers_test.go:175: Cleaning up "force-systemd-env-948800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-948800
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-948800: (2.724099582s)
--- PASS: TestForceSystemdEnv (40.57s)

TestErrorSpam/setup (31.3s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-555199 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-555199 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-555199 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-555199 --driver=docker  --container-runtime=crio: (31.295439488s)
--- PASS: TestErrorSpam/setup (31.30s)

TestErrorSpam/start (0.86s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555199 --log_dir /tmp/nospam-555199 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555199 --log_dir /tmp/nospam-555199 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555199 --log_dir /tmp/nospam-555199 start --dry-run
--- PASS: TestErrorSpam/start (0.86s)

TestErrorSpam/status (1.11s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555199 --log_dir /tmp/nospam-555199 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555199 --log_dir /tmp/nospam-555199 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555199 --log_dir /tmp/nospam-555199 status
--- PASS: TestErrorSpam/status (1.11s)

TestErrorSpam/pause (1.81s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555199 --log_dir /tmp/nospam-555199 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555199 --log_dir /tmp/nospam-555199 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555199 --log_dir /tmp/nospam-555199 pause
--- PASS: TestErrorSpam/pause (1.81s)

TestErrorSpam/unpause (1.86s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555199 --log_dir /tmp/nospam-555199 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555199 --log_dir /tmp/nospam-555199 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555199 --log_dir /tmp/nospam-555199 unpause
--- PASS: TestErrorSpam/unpause (1.86s)

TestErrorSpam/stop (1.48s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555199 --log_dir /tmp/nospam-555199 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-555199 --log_dir /tmp/nospam-555199 stop: (1.281128951s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555199 --log_dir /tmp/nospam-555199 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555199 --log_dir /tmp/nospam-555199 stop
--- PASS: TestErrorSpam/stop (1.48s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17243-2840729/.minikube/files/etc/test/nested/copy/2846109/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (53.05s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-127648 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0914 22:34:49.832611 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
E0914 22:34:49.838217 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
E0914 22:34:49.848435 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
E0914 22:34:49.868651 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
E0914 22:34:49.908889 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
E0914 22:34:49.989236 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
E0914 22:34:50.149677 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
E0914 22:34:50.470009 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
E0914 22:34:51.110710 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
E0914 22:34:52.390909 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
E0914 22:34:54.951440 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
E0914 22:35:00.071624 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-127648 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (53.046820114s)
--- PASS: TestFunctional/serial/StartWithProxy (53.05s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (42.87s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-127648 --alsologtostderr -v=8
E0914 22:35:10.312582 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
E0914 22:35:30.792847 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-127648 --alsologtostderr -v=8: (42.871114516s)
functional_test.go:659: soft start took 42.871656119s for "functional-127648" cluster.
--- PASS: TestFunctional/serial/SoftStart (42.87s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-127648 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-127648 cache add registry.k8s.io/pause:3.1: (1.421431421s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-127648 cache add registry.k8s.io/pause:3.3: (1.441927546s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-127648 cache add registry.k8s.io/pause:latest: (1.316902426s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.18s)

TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-127648 /tmp/TestFunctionalserialCacheCmdcacheadd_local2310885434/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 cache add minikube-local-cache-test:functional-127648
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 cache delete minikube-local-cache-test:functional-127648
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-127648
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-127648 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (324.951909ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-127648 cache reload: (1.176715587s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.18s)
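
The cache reload sequence above can be replayed manually against the same profile; a sketch assuming the crio runtime used in this run (the failing inspecti call is expected while the image is absent):
	# drop the image inside the node and confirm it is gone
	minikube -p functional-127648 ssh "sudo crictl rmi registry.k8s.io/pause:latest"
	minikube -p functional-127648 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"   # expect non-zero exit
	# push everything in the local cache back into the node, then verify the image returned
	minikube -p functional-127648 cache reload
	minikube -p functional-127648 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"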

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 kubectl -- --context functional-127648 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-127648 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (34.96s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-127648 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0914 22:36:11.753704 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-127648 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.956569962s)
functional_test.go:757: restart took 34.956663649s for "functional-127648" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.96s)
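
The restart above is an ordinary `start` against the existing profile with an extra apiserver flag; a minimal sketch using the same flag values as this run:
	minikube start -p functional-127648 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all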

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-127648 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
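
The health check reads the control-plane pods as JSON and asserts phase and readiness; a hedged jsonpath variant (the expression is illustrative, not the one functional_test.go uses) that prints each component's phase:
	kubectl --context functional-127648 get po -l tier=control-plane -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'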

                                                
                                    
TestFunctional/serial/LogsCmd (1.77s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-127648 logs: (1.772777655s)
--- PASS: TestFunctional/serial/LogsCmd (1.77s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.83s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 logs --file /tmp/TestFunctionalserialLogsFileCmd4244938398/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-127648 logs --file /tmp/TestFunctionalserialLogsFileCmd4244938398/001/logs.txt: (1.831188343s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.83s)

                                                
                                    
TestFunctional/serial/InvalidService (4.43s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-127648 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-127648
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-127648: exit status 115 (552.535944ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30736 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-127648 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.43s)
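
Reproducing this failure mode only needs a Service whose selector matches no running pod; a sketch in which invalid-svc.yaml is a hypothetical stand-in for testdata/invalidsvc.yaml:
	kubectl --context functional-127648 apply -f invalid-svc.yaml
	minikube service invalid-svc -p functional-127648    # expect SVC_UNREACHABLE, exit status 115 in this run
	kubectl --context functional-127648 delete -f invalid-svc.yaml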

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-127648 config get cpus: exit status 14 (89.742617ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-127648 config get cpus: exit status 14 (72.891652ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
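
The config round trip above is easy to replay; exit status 14 is what this run returned when the key was absent:
	minikube -p functional-127648 config unset cpus
	minikube -p functional-127648 config get cpus    # key missing -> non-zero exit (14 here)
	minikube -p functional-127648 config set cpus 2
	minikube -p functional-127648 config get cpus    # prints 2
	minikube -p functional-127648 config unset cpus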

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-127648 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-127648 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2870971: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.68s)

                                                
                                    
TestFunctional/parallel/DryRun (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-127648 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-127648 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (284.501574ms)

                                                
                                                
-- stdout --
	* [functional-127648] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 22:37:14.246743 2870602 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:37:14.246967 2870602 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:37:14.246993 2870602 out.go:309] Setting ErrFile to fd 2...
	I0914 22:37:14.247014 2870602 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:37:14.247338 2870602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-2840729/.minikube/bin
	I0914 22:37:14.247775 2870602 out.go:303] Setting JSON to false
	I0914 22:37:14.249093 2870602 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":80379,"bootTime":1694650655,"procs":363,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0914 22:37:14.249195 2870602 start.go:138] virtualization:  
	I0914 22:37:14.251889 2870602 out.go:177] * [functional-127648] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0914 22:37:14.254594 2870602 notify.go:220] Checking for updates...
	I0914 22:37:14.254520 2870602 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 22:37:14.258010 2870602 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 22:37:14.263065 2870602 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 22:37:14.265562 2870602 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube
	I0914 22:37:14.268179 2870602 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 22:37:14.270316 2870602 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 22:37:14.273506 2870602 config.go:182] Loaded profile config "functional-127648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:37:14.274661 2870602 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 22:37:14.313912 2870602 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0914 22:37:14.314007 2870602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 22:37:14.434629 2870602 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-09-14 22:37:14.42403455 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 22:37:14.434729 2870602 docker.go:294] overlay module found
	I0914 22:37:14.441137 2870602 out.go:177] * Using the docker driver based on existing profile
	I0914 22:37:14.443741 2870602 start.go:298] selected driver: docker
	I0914 22:37:14.443763 2870602 start.go:902] validating driver "docker" against &{Name:functional-127648 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-127648 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:37:14.443879 2870602 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 22:37:14.446202 2870602 out.go:177] 
	W0914 22:37:14.448216 2870602 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0914 22:37:14.450130 2870602 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-127648 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.59s)
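
A dry run validates the request against the existing profile without changing it; asking for less memory than the usable minimum reproduces the RSRC_INSUFFICIENT_REQ_MEMORY failure seen above (exit status 23 in this run):
	minikube start -p functional-127648 --dry-run --memory 250MB --driver=docker --container-runtime=crio   # expected to fail
	minikube start -p functional-127648 --dry-run --driver=docker --container-runtime=crio                  # succeeds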

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-127648 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-127648 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (267.301809ms)

                                                
                                                
-- stdout --
	* [functional-127648] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 22:37:13.963844 2870554 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:37:13.963999 2870554 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:37:13.964007 2870554 out.go:309] Setting ErrFile to fd 2...
	I0914 22:37:13.964013 2870554 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:37:13.964361 2870554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-2840729/.minikube/bin
	I0914 22:37:13.964900 2870554 out.go:303] Setting JSON to false
	I0914 22:37:13.966218 2870554 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":80379,"bootTime":1694650655,"procs":363,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0914 22:37:13.966324 2870554 start.go:138] virtualization:  
	I0914 22:37:13.968846 2870554 out.go:177] * [functional-127648] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	I0914 22:37:13.970731 2870554 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 22:37:13.970928 2870554 notify.go:220] Checking for updates...
	I0914 22:37:13.972531 2870554 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 22:37:13.975175 2870554 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 22:37:13.976973 2870554 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube
	I0914 22:37:13.978873 2870554 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 22:37:13.980840 2870554 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 22:37:13.983226 2870554 config.go:182] Loaded profile config "functional-127648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:37:13.983812 2870554 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 22:37:14.022182 2870554 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0914 22:37:14.022282 2870554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 22:37:14.154633 2870554 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-09-14 22:37:14.140600151 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 22:37:14.154739 2870554 docker.go:294] overlay module found
	I0914 22:37:14.157212 2870554 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0914 22:37:14.159339 2870554 start.go:298] selected driver: docker
	I0914 22:37:14.159359 2870554 start.go:902] validating driver "docker" against &{Name:functional-127648 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-127648 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 22:37:14.159468 2870554 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 22:37:14.161847 2870554 out.go:177] 
	W0914 22:37:14.164000 2870554 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0914 22:37:14.165836 2870554 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.27s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.16s)
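
`minikube status` supports a Go-template format string and JSON output in addition to the default table; a sketch using the same status fields the test formats (the label names in the template are arbitrary):
	minikube -p functional-127648 status
	minikube -p functional-127648 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	minikube -p functional-127648 status -o json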

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-127648 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-127648 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-qjlrr" [649df819-d075-4c4e-93dc-bd0833612cce] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-qjlrr" [649df819-d075-4c4e-93dc-bd0833612cce] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.019508508s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30945
functional_test.go:1674: http://192.168.49.2:30945: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-7799dfb7c6-qjlrr

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30945
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.69s)
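
The connect test is a plain NodePort round trip; a sketch using the same image and port as the run above (the curl step is an addition for manual verification):
	kubectl --context functional-127648 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-127648 expose deployment hello-node-connect --type=NodePort --port=8080
	minikube -p functional-127648 service hello-node-connect --url
	curl "$(minikube -p functional-127648 service hello-node-connect --url)"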

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [713dc5ba-ece0-461f-bb9b-2cae1bb7ee9f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.050142598s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-127648 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-127648 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-127648 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-127648 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d0965d5f-441e-4034-bc97-18d48ba78aa5] Pending
helpers_test.go:344: "sp-pod" [d0965d5f-441e-4034-bc97-18d48ba78aa5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d0965d5f-441e-4034-bc97-18d48ba78aa5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.016695833s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-127648 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-127648 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-127648 delete -f testdata/storage-provisioner/pod.yaml: (1.111757793s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-127648 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ae00d845-1f24-4726-b0c3-92f3ec1a8bc7] Pending
helpers_test.go:344: "sp-pod" [ae00d845-1f24-4726-b0c3-92f3ec1a8bc7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.011901383s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-127648 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.40s)
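
The PVC test writes through one pod, deletes it, and reads the same volume back from a replacement pod; a sketch assuming manifests equivalent to testdata/storage-provisioner/pvc.yaml and pod.yaml:
	kubectl --context functional-127648 apply -f pvc.yaml
	kubectl --context functional-127648 apply -f pod.yaml
	kubectl --context functional-127648 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-127648 delete -f pod.yaml
	kubectl --context functional-127648 apply -f pod.yaml
	kubectl --context functional-127648 exec sp-pod -- ls /tmp/mount    # foo should survive the pod swap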

                                                
                                    
TestFunctional/parallel/SSHCmd (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.89s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh -n functional-127648 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 cp functional-127648:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd674725946/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh -n functional-127648 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.50s)

                                                
                                    
TestFunctional/parallel/FileSync (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/2846109/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh "sudo cat /etc/test/nested/copy/2846109/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.41s)

                                                
                                    
TestFunctional/parallel/CertSync (2.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/2846109.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh "sudo cat /etc/ssl/certs/2846109.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/2846109.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh "sudo cat /usr/share/ca-certificates/2846109.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/28461092.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh "sudo cat /etc/ssl/certs/28461092.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/28461092.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh "sudo cat /usr/share/ca-certificates/28461092.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.46s)
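
Cert sync copies host certificates into the node under both /etc/ssl/certs and /usr/share/ca-certificates; the numeric filenames above come from this host, so a hand check simply cats the same paths over ssh:
	minikube -p functional-127648 ssh "sudo cat /etc/ssl/certs/2846109.pem"
	minikube -p functional-127648 ssh "sudo cat /usr/share/ca-certificates/2846109.pem"
	minikube -p functional-127648 ssh "sudo cat /etc/ssl/certs/51391683.0"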

                                                
                                    
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-127648 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-127648 ssh "sudo systemctl is-active docker": exit status 1 (370.680848ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-127648 ssh "sudo systemctl is-active containerd": exit status 1 (420.688844ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.79s)
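
On a crio cluster the other runtimes should report inactive; systemctl is-active exits 3 for an inactive unit, which is what the ssh sessions above propagate. A sketch (the crio line is an added sanity check, not part of the test):
	minikube -p functional-127648 ssh "sudo systemctl is-active docker"       # prints "inactive", non-zero exit
	minikube -p functional-127648 ssh "sudo systemctl is-active containerd"   # prints "inactive", non-zero exit
	minikube -p functional-127648 ssh "sudo systemctl is-active crio"         # the active runtime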

                                                
                                    
TestFunctional/parallel/License (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-127648 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-127648 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-127648 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-127648 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2868626: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-127648 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-127648 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [2b616b96-2fb6-47e1-97c7-35979f3a56df] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [2b616b96-2fb6-47e1-97c7-35979f3a56df] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.027278646s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.42s)
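
The tunnel flow keeps `minikube tunnel` running while a LoadBalancer Service acquires an ingress IP; a sketch in which testsvc.yaml stands in for testdata/testsvc.yaml:
	minikube -p functional-127648 tunnel &    # leave running; it routes traffic for LoadBalancer services
	kubectl --context functional-127648 apply -f testsvc.yaml
	kubectl --context functional-127648 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl "http://$(kubectl --context functional-127648 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')/"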

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-127648 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.209.37 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-127648 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-127648 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-127648 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-kthzn" [45668263-3e62-4d05-8d0a-3f3a524a5213] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-kthzn" [45668263-3e62-4d05-8d0a-3f3a524a5213] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.015506555s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "352.667929ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "56.593704ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "348.084261ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "56.338131ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-127648 /tmp/TestFunctionalparallelMountCmdany-port1916340695/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1694731029557952080" to /tmp/TestFunctionalparallelMountCmdany-port1916340695/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1694731029557952080" to /tmp/TestFunctionalparallelMountCmdany-port1916340695/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1694731029557952080" to /tmp/TestFunctionalparallelMountCmdany-port1916340695/001/test-1694731029557952080
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-127648 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (394.268274ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 14 22:37 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 14 22:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 14 22:37 test-1694731029557952080
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh cat /mount-9p/test-1694731029557952080
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-127648 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [6698fcab-a9f8-438d-8251-2cd275c5c40b] Pending
helpers_test.go:344: "busybox-mount" [6698fcab-a9f8-438d-8251-2cd275c5c40b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [6698fcab-a9f8-438d-8251-2cd275c5c40b] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [6698fcab-a9f8-438d-8251-2cd275c5c40b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.018988505s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-127648 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-127648 /tmp/TestFunctionalparallelMountCmdany-port1916340695/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.81s)
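
The any-port flow above amounts to mounting a host directory into the guest over 9p and checking it from both sides; the busybox-mount pod then exercises the same directory from inside the cluster. A rough manual equivalent (the host path below is a placeholder; the real run uses a generated /tmp/TestFunctionalparallelMountCmdany-port... directory):

	# Expose a host directory inside the guest at /mount-9p (the command stays in the foreground, hence &)
	out/minikube-linux-arm64 mount -p functional-127648 /tmp/mount-demo:/mount-9p &
	# Verify the 9p mount and inspect it from the guest side
	out/minikube-linux-arm64 -p functional-127648 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-127648 ssh -- ls -la /mount-9p
	# Force-unmount and stop the mount process when done
	out/minikube-linux-arm64 -p functional-127648 ssh "sudo umount -f /mount-9p"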

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 service list -o json
functional_test.go:1493: Took "664.833202ms" to run "out/minikube-linux-arm64 -p functional-127648 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30635
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30635
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)
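
The ServiceCmd subtests above all resolve the same hello-node NodePort service, just in different output shapes. A minimal sketch:

	# List exposed services as a table and as JSON
	out/minikube-linux-arm64 -p functional-127648 service list
	out/minikube-linux-arm64 -p functional-127648 service list -o json
	# Resolve the endpoint as an HTTPS URL, as a bare node IP, and as a plain URL
	out/minikube-linux-arm64 -p functional-127648 service --namespace=default --https --url hello-node
	out/minikube-linux-arm64 -p functional-127648 service hello-node --url --format='{{.IP}}'
	out/minikube-linux-arm64 -p functional-127648 service hello-node --url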

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-127648 /tmp/TestFunctionalparallelMountCmdspecific-port1691688255/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-127648 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (426.607409ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-127648 /tmp/TestFunctionalparallelMountCmdspecific-port1691688255/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-127648 ssh "sudo umount -f /mount-9p": exit status 1 (386.245981ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-127648 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-127648 /tmp/TestFunctionalparallelMountCmdspecific-port1691688255/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.39s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-127648 /tmp/TestFunctionalparallelMountCmdVerifyCleanup626617753/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-127648 /tmp/TestFunctionalparallelMountCmdVerifyCleanup626617753/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-127648 /tmp/TestFunctionalparallelMountCmdVerifyCleanup626617753/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-127648 ssh "findmnt -T" /mount1: exit status 1 (1.259691457s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-127648 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-127648 /tmp/TestFunctionalparallelMountCmdVerifyCleanup626617753/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-127648 /tmp/TestFunctionalparallelMountCmdVerifyCleanup626617753/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-127648 /tmp/TestFunctionalparallelMountCmdVerifyCleanup626617753/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.93s)
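
VerifyCleanup starts three concurrent mounts of one host directory and then tears them all down with a single kill switch rather than unmounting each one. A sketch of that flow, again with a placeholder host path:

	# Start several mounts of the same host directory
	out/minikube-linux-arm64 mount -p functional-127648 /tmp/mount-demo:/mount1 &
	out/minikube-linux-arm64 mount -p functional-127648 /tmp/mount-demo:/mount2 &
	out/minikube-linux-arm64 mount -p functional-127648 /tmp/mount-demo:/mount3 &
	# Kill every mount process belonging to this profile in one shot
	out/minikube-linux-arm64 mount -p functional-127648 --kill=true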

                                                
                                    
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
TestFunctional/parallel/Version/components (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 version -o=json --components
E0914 22:37:33.674517 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/Version/components (0.97s)
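
The stray cert_rotation error above refers to a client certificate of the earlier addons-909789 profile that is no longer on disk; it does not affect the result. The two Version subtests exercise the short and the per-component forms of the same command:

	# Just the minikube version string
	out/minikube-linux-arm64 -p functional-127648 version --short
	# Versions of minikube and its bundled components as JSON
	out/minikube-linux-arm64 -p functional-127648 version -o=json --components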

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-127648 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-127648
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-127648 image ls --format short --alsologtostderr:
I0914 22:37:44.362699 2873378 out.go:296] Setting OutFile to fd 1 ...
I0914 22:37:44.362988 2873378 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 22:37:44.363019 2873378 out.go:309] Setting ErrFile to fd 2...
I0914 22:37:44.363039 2873378 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 22:37:44.363341 2873378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-2840729/.minikube/bin
I0914 22:37:44.364157 2873378 config.go:182] Loaded profile config "functional-127648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0914 22:37:44.364341 2873378 config.go:182] Loaded profile config "functional-127648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0914 22:37:44.365019 2873378 cli_runner.go:164] Run: docker container inspect functional-127648 --format={{.State.Status}}
I0914 22:37:44.393104 2873378 ssh_runner.go:195] Run: systemctl --version
I0914 22:37:44.393156 2873378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-127648
I0914 22:37:44.415563 2873378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36398 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/functional-127648/id_rsa Username:docker}
I0914 22:37:44.526828 2873378 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-127648 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | alpine             | fa0c6bb795403 | 45.3MB |
| gcr.io/google-containers/addon-resizer  | functional-127648  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b18bf71b941ba | 60.9MB |
| registry.k8s.io/kube-controller-manager | v1.28.1            | 8b6e1980b7584 | 117MB  |
| registry.k8s.io/kube-proxy              | v1.28.1            | 812f5241df7fd | 69.9MB |
| registry.k8s.io/kube-scheduler          | v1.28.1            | b4a5a57e99492 | 59.2MB |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| docker.io/library/nginx                 | latest             | 91582cfffc2d0 | 196MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-apiserver          | v1.28.1            | b29fb62480892 | 121MB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-127648 image ls --format table --alsologtostderr:
I0914 22:37:44.982793 2873514 out.go:296] Setting OutFile to fd 1 ...
I0914 22:37:44.983061 2873514 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 22:37:44.983088 2873514 out.go:309] Setting ErrFile to fd 2...
I0914 22:37:44.983108 2873514 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 22:37:44.983412 2873514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-2840729/.minikube/bin
I0914 22:37:44.984186 2873514 config.go:182] Loaded profile config "functional-127648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0914 22:37:44.984407 2873514 config.go:182] Loaded profile config "functional-127648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0914 22:37:44.984981 2873514 cli_runner.go:164] Run: docker container inspect functional-127648 --format={{.State.Status}}
I0914 22:37:45.014980 2873514 ssh_runner.go:195] Run: systemctl --version
I0914 22:37:45.015039 2873514 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-127648
I0914 22:37:45.041366 2873514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36398 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/functional-127648/id_rsa Username:docker}
I0914 22:37:45.142542 2873514 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-127648 image ls --format json --alsologtostderr:
[{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:d4ad404d1c05c2f18b76f5d6936b838be07fed14b3ffefd09a6b2f0c20e3ef5c","registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.1"],"size":"120857550"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"91582cfffc2d0daa6f42adb6
fb74665a047310f76a28e9ed5b0185a2d0f362a6","repoDigests":["docker.io/library/nginx@sha256:6926dd802f40e5e7257fded83e0d8030039642e4e10c4a98a6478e9c6fe06153","docker.io/library/nginx@sha256:85eabf2757cb5b5b84248d7feb019079501dfd8691fc79b8b1d0ff1591a6270b"],"repoTags":["docker.io/library/nginx:latest"],"size":"196196618"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0bb4ad9c0c3d2258bc97616ddb51291e5d20d6ba7d4406767f4355f56fab842d","registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4"],"repoTags":["registry.k8s.io/k
ube-scheduler:v1.28.1"],"size":"59188020"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},{"id":"fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1","repoDigests":["docker.io/library/nginx@sha256:1616
4a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70","docker.io/library/nginx@sha256:700873f42f88d156b7f78f32f0a1dc782286eedc0f175d62d90870820dd98790"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45265718"},{"id":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4a0dd5abeba8e3ca67884fe9db43e8dbb299ad3199f0c6281e8a70f03ce4248f","registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.1"],"size":"117187378"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id
":"812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26","repoDigests":["registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c","registry.k8s.io/kube-proxy@sha256:a9d9eaff8bae5cb45cc640255fd1490c85c3517d92f2c78bcd71dde9a12d5220"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.1"],"size":"69926807"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/bus
ybox:1.28.4-glibc"],"size":"3774172"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","repoDigests":["docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f","docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"60881430"},{"id":"20b332c9a70d8516d84
9d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-127648"],"size":"34114467"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-127648 image ls --format json --alsologtostderr:
I0914 22:37:44.693181 2873441 out.go:296] Setting OutFile to fd 1 ...
I0914 22:37:44.693449 2873441 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 22:37:44.693478 2873441 out.go:309] Setting ErrFile to fd 2...
I0914 22:37:44.693498 2873441 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 22:37:44.693843 2873441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-2840729/.minikube/bin
I0914 22:37:44.694584 2873441 config.go:182] Loaded profile config "functional-127648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0914 22:37:44.694757 2873441 config.go:182] Loaded profile config "functional-127648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0914 22:37:44.695315 2873441 cli_runner.go:164] Run: docker container inspect functional-127648 --format={{.State.Status}}
I0914 22:37:44.720361 2873441 ssh_runner.go:195] Run: systemctl --version
I0914 22:37:44.720413 2873441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-127648
I0914 22:37:44.745932 2873441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36398 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/functional-127648/id_rsa Username:docker}
I0914 22:37:44.854928 2873441 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-127648 image ls --format yaml --alsologtostderr:
- id: b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0bb4ad9c0c3d2258bc97616ddb51291e5d20d6ba7d4406767f4355f56fab842d
- registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.1
size: "59188020"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:d4ad404d1c05c2f18b76f5d6936b838be07fed14b3ffefd09a6b2f0c20e3ef5c
- registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.1
size: "120857550"
- id: 812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26
repoDigests:
- registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c
- registry.k8s.io/kube-proxy@sha256:a9d9eaff8bae5cb45cc640255fd1490c85c3517d92f2c78bcd71dde9a12d5220
repoTags:
- registry.k8s.io/kube-proxy:v1.28.1
size: "69926807"
- id: b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79
repoDigests:
- docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "60881430"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-127648
size: "34114467"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4a0dd5abeba8e3ca67884fe9db43e8dbb299ad3199f0c6281e8a70f03ce4248f
- registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.1
size: "117187378"
- id: fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1
repoDigests:
- docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70
- docker.io/library/nginx@sha256:700873f42f88d156b7f78f32f0a1dc782286eedc0f175d62d90870820dd98790
repoTags:
- docker.io/library/nginx:alpine
size: "45265718"
- id: 91582cfffc2d0daa6f42adb6fb74665a047310f76a28e9ed5b0185a2d0f362a6
repoDigests:
- docker.io/library/nginx@sha256:6926dd802f40e5e7257fded83e0d8030039642e4e10c4a98a6478e9c6fe06153
- docker.io/library/nginx@sha256:85eabf2757cb5b5b84248d7feb019079501dfd8691fc79b8b1d0ff1591a6270b
repoTags:
- docker.io/library/nginx:latest
size: "196196618"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-127648 image ls --format yaml --alsologtostderr:
I0914 22:37:44.363475 2873379 out.go:296] Setting OutFile to fd 1 ...
I0914 22:37:44.363696 2873379 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 22:37:44.363707 2873379 out.go:309] Setting ErrFile to fd 2...
I0914 22:37:44.363713 2873379 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 22:37:44.363982 2873379 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-2840729/.minikube/bin
I0914 22:37:44.364619 2873379 config.go:182] Loaded profile config "functional-127648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0914 22:37:44.364751 2873379 config.go:182] Loaded profile config "functional-127648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0914 22:37:44.365250 2873379 cli_runner.go:164] Run: docker container inspect functional-127648 --format={{.State.Status}}
I0914 22:37:44.385343 2873379 ssh_runner.go:195] Run: systemctl --version
I0914 22:37:44.385394 2873379 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-127648
I0914 22:37:44.413230 2873379 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36398 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/functional-127648/id_rsa Username:docker}
I0914 22:37:44.514365 2873379 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
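
The four ImageList subtests print the same crio image store and differ only in the --format flag:

	# The same listing rendered as a short list, a table, JSON and YAML
	out/minikube-linux-arm64 -p functional-127648 image ls --format short
	out/minikube-linux-arm64 -p functional-127648 image ls --format table
	out/minikube-linux-arm64 -p functional-127648 image ls --format json
	out/minikube-linux-arm64 -p functional-127648 image ls --format yaml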

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-127648 ssh pgrep buildkitd: exit status 1 (378.805753ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 image build -t localhost/my-image:functional-127648 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-127648 image build -t localhost/my-image:functional-127648 testdata/build --alsologtostderr: (2.300547136s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-127648 image build -t localhost/my-image:functional-127648 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 54bb1753e59
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-127648
--> c09d3eee2eb
Successfully tagged localhost/my-image:functional-127648
c09d3eee2eb0aaef98f365ef47a9deeba9d99539087cc299d49caf046e2c1cd9
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-127648 image build -t localhost/my-image:functional-127648 testdata/build --alsologtostderr:
I0914 22:37:45.033306 2873521 out.go:296] Setting OutFile to fd 1 ...
I0914 22:37:45.034426 2873521 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 22:37:45.034442 2873521 out.go:309] Setting ErrFile to fd 2...
I0914 22:37:45.034449 2873521 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 22:37:45.034746 2873521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-2840729/.minikube/bin
I0914 22:37:45.035435 2873521 config.go:182] Loaded profile config "functional-127648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0914 22:37:45.036067 2873521 config.go:182] Loaded profile config "functional-127648": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0914 22:37:45.036814 2873521 cli_runner.go:164] Run: docker container inspect functional-127648 --format={{.State.Status}}
I0914 22:37:45.059271 2873521 ssh_runner.go:195] Run: systemctl --version
I0914 22:37:45.059334 2873521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-127648
I0914 22:37:45.085732 2873521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36398 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/functional-127648/id_rsa Username:docker}
I0914 22:37:45.196793 2873521 build_images.go:151] Building image from path: /tmp/build.2014035358.tar
I0914 22:37:45.196858 2873521 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0914 22:37:45.208623 2873521 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2014035358.tar
I0914 22:37:45.213632 2873521 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2014035358.tar: stat -c "%s %y" /var/lib/minikube/build/build.2014035358.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2014035358.tar': No such file or directory
I0914 22:37:45.213675 2873521 ssh_runner.go:362] scp /tmp/build.2014035358.tar --> /var/lib/minikube/build/build.2014035358.tar (3072 bytes)
I0914 22:37:45.243007 2873521 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2014035358
I0914 22:37:45.254129 2873521 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2014035358 -xf /var/lib/minikube/build/build.2014035358.tar
I0914 22:37:45.265793 2873521 crio.go:297] Building image: /var/lib/minikube/build/build.2014035358
I0914 22:37:45.265926 2873521 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-127648 /var/lib/minikube/build/build.2014035358 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0914 22:37:47.231049 2873521 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-127648 /var/lib/minikube/build/build.2014035358 --cgroup-manager=cgroupfs: (1.965092932s)
I0914 22:37:47.231117 2873521 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2014035358
I0914 22:37:47.241676 2873521 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2014035358.tar
I0914 22:37:47.253292 2873521 build_images.go:207] Built localhost/my-image:functional-127648 from /tmp/build.2014035358.tar
I0914 22:37:47.253319 2873521 build_images.go:123] succeeded building to: functional-127648
I0914 22:37:47.253324 2873521 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.93s)
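
Going by the STEP lines in the build output, testdata/build holds a three-step Dockerfile (its exact contents are not shown in this log), and because the runtime is crio the build is delegated to podman inside the node. A sketch of the flow:

	# testdata/build appears to contain roughly:
	#   FROM gcr.io/k8s-minikube/busybox
	#   RUN true
	#   ADD content.txt /
	# Build the context inside the cluster, tag the result locally, then confirm it is listed
	out/minikube-linux-arm64 -p functional-127648 image build -t localhost/my-image:functional-127648 testdata/build --alsologtostderr
	out/minikube-linux-arm64 -p functional-127648 image ls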

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.871921843s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-127648
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.89s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 image load --daemon gcr.io/google-containers/addon-resizer:functional-127648 --alsologtostderr
2023/09/14 22:37:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-127648 image load --daemon gcr.io/google-containers/addon-resizer:functional-127648 --alsologtostderr: (4.958989094s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.26s)
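
ImageCommands/Setup retags a pulled addon-resizer image with the profile name, and ImageLoadDaemon then copies that tag from the host Docker daemon into the cluster's image store; roughly:

	# Pull a small test image and retag it after the profile under test
	docker pull gcr.io/google-containers/addon-resizer:1.8.8
	docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-127648
	# Load the tagged image from the local Docker daemon into the minikube node, then verify
	out/minikube-linux-arm64 -p functional-127648 image load --daemon gcr.io/google-containers/addon-resizer:functional-127648
	out/minikube-linux-arm64 -p functional-127648 image ls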

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 image load --daemon gcr.io/google-containers/addon-resizer:functional-127648 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-127648 image load --daemon gcr.io/google-containers/addon-resizer:functional-127648 --alsologtostderr: (3.067974362s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.38s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)
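
All three UpdateContextCmd subtests run the same command, which refreshes the kubeconfig entry for the profile so that it points at the cluster's current IP and port:

	# Rewrite the kubeconfig context for this profile if the endpoint has changed
	out/minikube-linux-arm64 -p functional-127648 update-context --alsologtostderr -v=2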

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.409080246s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-127648
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 image load --daemon gcr.io/google-containers/addon-resizer:functional-127648 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-127648 image load --daemon gcr.io/google-containers/addon-resizer:functional-127648 --alsologtostderr: (3.593899017s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 image save gcr.io/google-containers/addon-resizer:functional-127648 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 image rm gcr.io/google-containers/addon-resizer:functional-127648 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-127648 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.018421415s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-127648
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-127648 image save --daemon gcr.io/google-containers/addon-resizer:functional-127648 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-127648
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.97s)
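
The remaining image subtests round-trip the addon-resizer image through a tarball and back through the Docker daemon (the save path below is shortened from the Jenkins workspace location used in the run):

	# Save the in-cluster image to a tar archive, remove it from the cluster, then restore it from the archive
	out/minikube-linux-arm64 -p functional-127648 image save gcr.io/google-containers/addon-resizer:functional-127648 ./addon-resizer-save.tar
	out/minikube-linux-arm64 -p functional-127648 image rm gcr.io/google-containers/addon-resizer:functional-127648
	out/minikube-linux-arm64 -p functional-127648 image load ./addon-resizer-save.tar
	# Or export it straight back into the host Docker daemon and confirm it is visible there
	out/minikube-linux-arm64 -p functional-127648 image save --daemon gcr.io/google-containers/addon-resizer:functional-127648
	docker image inspect gcr.io/google-containers/addon-resizer:functional-127648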

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.09s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-127648
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-127648
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-127648
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (100.45s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-438037 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-438037 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m40.451850724s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (100.45s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.64s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-438037 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.64s)

                                                
                                    
TestJSONOutput/start/Command (51.14s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-058761 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0914 22:47:10.611925 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-058761 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (51.136032714s)
--- PASS: TestJSONOutput/start/Command (51.14s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.79s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-058761 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.79s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-058761 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.72s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.88s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-058761 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-058761 --output=json --user=testUser: (5.881824398s)
--- PASS: TestJSONOutput/stop/Command (5.88s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-584221 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-584221 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (78.84568ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9e3e9365-9938-418c-88f4-eeb37d39f46c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-584221] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8323c0e9-be00-4175-93ea-071084f685d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17243"}}
	{"specversion":"1.0","id":"28d1b4e6-2da0-4119-b10e-93ba8be0e28c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"838b655c-79b7-4c29-ac09-d55d04ae9768","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig"}}
	{"specversion":"1.0","id":"d36dfb35-b419-4d93-a523-123db68fe9bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube"}}
	{"specversion":"1.0","id":"e6292acc-5889-4b88-b41a-7c2ade2db652","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"069b228c-3582-4a7c-9212-2b50a2769c32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9fbeb9da-9fe5-4bc2-85be-9207f7deb3b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-584221" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-584221
--- PASS: TestErrorJSONOutput (0.23s)
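Each line that --output=json emits is a self-contained CloudEvents-style JSON object; the error event captured above carries the exit code and the DRV_UNSUPPORTED_OS name in its data payload. A minimal decoding sketch in Go, using the field names visible in the captured stdout and keeping the data payload generic because step, info, and error events carry different keys:

// eventdecode_sketch.go - decodes one of the --output=json lines shown above.
package main

import (
	"encoding/json"
	"fmt"
)

type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// Error event copied verbatim from the stdout block above.
	line := `{"specversion":"1.0","id":"9fbeb9da-9fe5-4bc2-85be-9207f7deb3b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
}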

                                                
                                    
TestKicCustomNetwork/create_custom_network (44.11s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-191066 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-191066 --network=: (42.022283671s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-191066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-191066
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-191066: (2.057532209s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.11s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (37.26s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-257628 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-257628 --network=bridge: (35.203776607s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-257628" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-257628
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-257628: (2.034752883s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.26s)

                                                
                                    
TestKicExistingNetwork (35.33s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-417774 --network=existing-network
E0914 22:49:49.832611 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-417774 --network=existing-network: (33.270343839s)
helpers_test.go:175: Cleaning up "existing-network-417774" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-417774
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-417774: (1.902239786s)
--- PASS: TestKicExistingNetwork (35.33s)

                                                
                                    
TestKicCustomSubnet (33.88s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-525006 --subnet=192.168.60.0/24
E0914 22:50:35.167160 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
E0914 22:50:35.172403 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
E0914 22:50:35.182645 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
E0914 22:50:35.202892 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
E0914 22:50:35.243125 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
E0914 22:50:35.323358 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
E0914 22:50:35.483676 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
E0914 22:50:35.804170 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
E0914 22:50:36.445259 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
E0914 22:50:37.725441 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
E0914 22:50:40.285840 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-525006 --subnet=192.168.60.0/24: (31.79909643s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-525006 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-525006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-525006
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-525006: (2.058344046s)
--- PASS: TestKicCustomSubnet (33.88s)
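The subnet check above relies on Docker's Go-template support: `docker network inspect <name> --format "{{(index .IPAM.Config 0).Subnet}}"` prints only the first IPAM subnet of the network minikube created. A minimal sketch of the same verification, assuming the custom-subnet-525006 network still exists:

// subnetcheck_sketch.go - verifies the subnet requested via --subnet was applied.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	want := "192.168.60.0/24" // value passed to --subnet in the run above
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-525006",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		panic(err)
	}
	got := strings.TrimSpace(string(out))
	if got != want {
		panic(fmt.Sprintf("subnet mismatch: got %q, want %q", got, want))
	}
	fmt.Println("network uses requested subnet:", got)
}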

                                                
                                    
TestKicStaticIP (34.66s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-660605 --static-ip=192.168.200.200
E0914 22:50:45.406866 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
E0914 22:50:55.647638 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
E0914 22:51:12.877236 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-660605 --static-ip=192.168.200.200: (32.354907032s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-660605 ip
E0914 22:51:16.128586 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
helpers_test.go:175: Cleaning up "static-ip-660605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-660605
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-660605: (2.142945112s)
--- PASS: TestKicStaticIP (34.66s)
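`minikube ip` is what the test uses to confirm --static-ip took effect: the printed address should match the one requested at start. A minimal sketch of that comparison, assuming the static-ip-660605 profile is still up:

// staticip_sketch.go - compares `minikube ip` output against the --static-ip value.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	want := "192.168.200.200" // value passed to --static-ip in the run above
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "static-ip-660605", "ip").Output()
	if err != nil {
		panic(err)
	}
	got := strings.TrimSpace(string(out))
	if got != want {
		panic(fmt.Sprintf("static IP not applied: got %q, want %q", got, want))
	}
	fmt.Println("profile is using the requested static IP:", got)
}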

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (68.25s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-846541 --driver=docker  --container-runtime=crio
E0914 22:51:42.928704 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-846541 --driver=docker  --container-runtime=crio: (30.423651441s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-849093 --driver=docker  --container-runtime=crio
E0914 22:51:57.089021 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-849093 --driver=docker  --container-runtime=crio: (32.269274408s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-846541
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-849093
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-849093" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-849093
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-849093: (1.98177313s)
helpers_test.go:175: Cleaning up "first-846541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-846541
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-846541: (2.316616975s)
--- PASS: TestMinikubeProfile (68.25s)
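`profile list -ojson` is what the test uses to see both profiles side by side after switching between them. A minimal decoding sketch; note that the top-level valid/invalid grouping and the Name field are assumptions based on minikube v1.31's JSON output format, not something shown verbatim in this log:

// profilelist_sketch.go - prints profile names from `minikube profile list -ojson`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Assumed schema: {"invalid":[...],"valid":[{"Name":...}]} as emitted by minikube v1.31.
type profileList struct {
	Valid   []struct{ Name string } `json:"valid"`
	Invalid []struct{ Name string } `json:"invalid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Println("valid profile:", p.Name)
	}
	for _, p := range pl.Invalid {
		fmt.Println("invalid profile:", p.Name)
	}
}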

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.37s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-536552 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-536552 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (9.368669125s)
--- PASS: TestMountStart/serial/StartWithMountFirst (10.37s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-536552 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)
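VerifyMountFirst boils down to one check: with --mount (plus the custom --mount-gid, --mount-uid, --mount-msize, and --mount-port flags used at start), the host directory must be visible inside the guest at /minikube-host via `minikube ssh`. A minimal sketch of that verification, assuming mount-start-1-536552 is still running:

// mountcheck_sketch.go - confirms the host mount is visible inside the guest.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `minikube ssh -- <cmd>` runs the command inside the node; a non-zero exit
	// here means /minikube-host is missing or unreadable.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "mount-start-1-536552",
		"ssh", "--", "ls", "/minikube-host").CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("mount not visible: %v\n%s", err, out))
	}
	fmt.Printf("/minikube-host contents:\n%s", out)
}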

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.03s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-538586 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-538586 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.029327415s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.03s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-538586 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.66s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-536552 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-536552 --alsologtostderr -v=5: (1.657027769s)
--- PASS: TestMountStart/serial/DeleteFirst (1.66s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-538586 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-538586
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-538586: (1.219284849s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.25s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-538586
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-538586: (7.246081046s)
--- PASS: TestMountStart/serial/RestartStopped (8.25s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-538586 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (70.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-174950 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0914 22:53:19.009237 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-174950 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m10.048032078s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (70.61s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-174950 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-174950 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-174950 -- rollout status deployment/busybox: (3.434130393s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-174950 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-174950 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-174950 -- exec busybox-5bc68d56bd-fkf4t -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-174950 -- exec busybox-5bc68d56bd-grlb8 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-174950 -- exec busybox-5bc68d56bd-fkf4t -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-174950 -- exec busybox-5bc68d56bd-grlb8 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-174950 -- exec busybox-5bc68d56bd-fkf4t -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-174950 -- exec busybox-5bc68d56bd-grlb8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.68s)
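DeployApp2Nodes schedules one busybox pod per node and then resolves kubernetes.io, kubernetes.default, and kubernetes.default.svc.cluster.local from each of them, which exercises both external and in-cluster DNS on every node. A minimal sketch of the same loop, assuming the two pod names captured above are still current (they change on a fresh deploy):

// dnscheck_sketch.go - runs nslookup inside each busybox pod in the cluster.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-5bc68d56bd-fkf4t", "busybox-5bc68d56bd-grlb8"}
	targets := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, target := range targets {
			// `minikube kubectl -p <profile> --` forwards the rest to kubectl
			// against that profile's cluster, as in the log above.
			args := []string{"kubectl", "-p", "multinode-174950", "--",
				"exec", pod, "--", "nslookup", target}
			out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
			if err != nil {
				panic(fmt.Sprintf("%s failed to resolve %s: %v\n%s", pod, target, err, out))
			}
			fmt.Printf("%s resolved %s\n", pod, target)
		}
	}
}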

                                                
                                    
TestMultiNode/serial/AddNode (20.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-174950 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-174950 -v 3 --alsologtostderr: (20.241023736s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (20.97s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 cp testdata/cp-test.txt multinode-174950:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 ssh -n multinode-174950 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 cp multinode-174950:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2384393423/001/cp-test_multinode-174950.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 ssh -n multinode-174950 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 cp multinode-174950:/home/docker/cp-test.txt multinode-174950-m02:/home/docker/cp-test_multinode-174950_multinode-174950-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 ssh -n multinode-174950 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 ssh -n multinode-174950-m02 "sudo cat /home/docker/cp-test_multinode-174950_multinode-174950-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 cp multinode-174950:/home/docker/cp-test.txt multinode-174950-m03:/home/docker/cp-test_multinode-174950_multinode-174950-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 ssh -n multinode-174950 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 ssh -n multinode-174950-m03 "sudo cat /home/docker/cp-test_multinode-174950_multinode-174950-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 cp testdata/cp-test.txt multinode-174950-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 ssh -n multinode-174950-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 cp multinode-174950-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2384393423/001/cp-test_multinode-174950-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 ssh -n multinode-174950-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 cp multinode-174950-m02:/home/docker/cp-test.txt multinode-174950:/home/docker/cp-test_multinode-174950-m02_multinode-174950.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 ssh -n multinode-174950-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 ssh -n multinode-174950 "sudo cat /home/docker/cp-test_multinode-174950-m02_multinode-174950.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 cp multinode-174950-m02:/home/docker/cp-test.txt multinode-174950-m03:/home/docker/cp-test_multinode-174950-m02_multinode-174950-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 ssh -n multinode-174950-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 ssh -n multinode-174950-m03 "sudo cat /home/docker/cp-test_multinode-174950-m02_multinode-174950-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 cp testdata/cp-test.txt multinode-174950-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 ssh -n multinode-174950-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 cp multinode-174950-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2384393423/001/cp-test_multinode-174950-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 ssh -n multinode-174950-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 cp multinode-174950-m03:/home/docker/cp-test.txt multinode-174950:/home/docker/cp-test_multinode-174950-m03_multinode-174950.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 ssh -n multinode-174950-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 ssh -n multinode-174950 "sudo cat /home/docker/cp-test_multinode-174950-m03_multinode-174950.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 cp multinode-174950-m03:/home/docker/cp-test.txt multinode-174950-m02:/home/docker/cp-test_multinode-174950-m03_multinode-174950-m02.txt
E0914 22:54:49.832669 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 ssh -n multinode-174950-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 ssh -n multinode-174950-m02 "sudo cat /home/docker/cp-test_multinode-174950-m03_multinode-174950-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.86s)
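Every `cp` above is paired with an `ssh -n <node> "sudo cat ..."` so the copy is verified on the node that received it, including the node-to-node copies. A minimal sketch of one host-to-node copy plus its verification, assuming the multinode-174950 cluster is still up:

// cpcheck_sketch.go - copies a local file to a node and reads it back over ssh.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	want := "hello from the host"
	src, err := os.CreateTemp("", "cp-test-*.txt")
	if err != nil {
		panic(err)
	}
	defer os.Remove(src.Name())
	if _, err := src.WriteString(want + "\n"); err != nil {
		panic(err)
	}
	src.Close()

	mk := "out/minikube-linux-arm64"
	// Copy the file onto the primary node, as in the log above.
	if out, err := exec.Command(mk, "-p", "multinode-174950", "cp",
		src.Name(), "multinode-174950:/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		panic(fmt.Sprintf("cp failed: %v\n%s", err, out))
	}
	// Read it back on the node and compare the content.
	out, err := exec.Command(mk, "-p", "multinode-174950", "ssh", "-n", "multinode-174950",
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	if strings.TrimSpace(string(out)) != want {
		panic(fmt.Sprintf("content mismatch: %q", out))
	}
	fmt.Println("copy verified on multinode-174950")
}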

                                                
                                    
TestMultiNode/serial/StopNode (2.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-174950 node stop m03: (1.229362334s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-174950 status: exit status 7 (578.109417ms)

                                                
                                                
-- stdout --
	multinode-174950
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-174950-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-174950-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-174950 status --alsologtostderr: exit status 7 (543.866335ms)

                                                
                                                
-- stdout --
	multinode-174950
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-174950-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-174950-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 22:54:52.602528 2919326 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:54:52.602718 2919326 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:54:52.602747 2919326 out.go:309] Setting ErrFile to fd 2...
	I0914 22:54:52.602768 2919326 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:54:52.603019 2919326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-2840729/.minikube/bin
	I0914 22:54:52.603267 2919326 out.go:303] Setting JSON to false
	I0914 22:54:52.603351 2919326 mustload.go:65] Loading cluster: multinode-174950
	I0914 22:54:52.603438 2919326 notify.go:220] Checking for updates...
	I0914 22:54:52.603826 2919326 config.go:182] Loaded profile config "multinode-174950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:54:52.603863 2919326 status.go:255] checking status of multinode-174950 ...
	I0914 22:54:52.604424 2919326 cli_runner.go:164] Run: docker container inspect multinode-174950 --format={{.State.Status}}
	I0914 22:54:52.623127 2919326 status.go:330] multinode-174950 host status = "Running" (err=<nil>)
	I0914 22:54:52.623218 2919326 host.go:66] Checking if "multinode-174950" exists ...
	I0914 22:54:52.623690 2919326 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-174950
	I0914 22:54:52.642841 2919326 host.go:66] Checking if "multinode-174950" exists ...
	I0914 22:54:52.643215 2919326 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 22:54:52.643264 2919326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-174950
	I0914 22:54:52.676771 2919326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36463 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/multinode-174950/id_rsa Username:docker}
	I0914 22:54:52.775552 2919326 ssh_runner.go:195] Run: systemctl --version
	I0914 22:54:52.781030 2919326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:54:52.794278 2919326 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 22:54:52.863213 2919326 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:55 SystemTime:2023-09-14 22:54:52.849234859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 22:54:52.865456 2919326 kubeconfig.go:92] found "multinode-174950" server: "https://192.168.58.2:8443"
	I0914 22:54:52.865496 2919326 api_server.go:166] Checking apiserver status ...
	I0914 22:54:52.865543 2919326 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 22:54:52.877886 2919326 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1274/cgroup
	I0914 22:54:52.888945 2919326 api_server.go:182] apiserver freezer: "12:freezer:/docker/5804e744fc84baf470e6572e7e22ed97634e03253cb719647b3f90dadf424715/crio/crio-4efd448e37f7562db5976c76bdebed004efa31c9af3f6e831b23abd27977d285"
	I0914 22:54:52.889013 2919326 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5804e744fc84baf470e6572e7e22ed97634e03253cb719647b3f90dadf424715/crio/crio-4efd448e37f7562db5976c76bdebed004efa31c9af3f6e831b23abd27977d285/freezer.state
	I0914 22:54:52.898631 2919326 api_server.go:204] freezer state: "THAWED"
	I0914 22:54:52.898657 2919326 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0914 22:54:52.907368 2919326 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0914 22:54:52.907394 2919326 status.go:421] multinode-174950 apiserver status = Running (err=<nil>)
	I0914 22:54:52.907405 2919326 status.go:257] multinode-174950 status: &{Name:multinode-174950 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 22:54:52.907431 2919326 status.go:255] checking status of multinode-174950-m02 ...
	I0914 22:54:52.907728 2919326 cli_runner.go:164] Run: docker container inspect multinode-174950-m02 --format={{.State.Status}}
	I0914 22:54:52.926077 2919326 status.go:330] multinode-174950-m02 host status = "Running" (err=<nil>)
	I0914 22:54:52.926101 2919326 host.go:66] Checking if "multinode-174950-m02" exists ...
	I0914 22:54:52.926395 2919326 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-174950-m02
	I0914 22:54:52.944305 2919326 host.go:66] Checking if "multinode-174950-m02" exists ...
	I0914 22:54:52.944680 2919326 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 22:54:52.944729 2919326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-174950-m02
	I0914 22:54:52.962186 2919326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36468 SSHKeyPath:/home/jenkins/minikube-integration/17243-2840729/.minikube/machines/multinode-174950-m02/id_rsa Username:docker}
	I0914 22:54:53.062775 2919326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 22:54:53.075881 2919326 status.go:257] multinode-174950-m02 status: &{Name:multinode-174950-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0914 22:54:53.075913 2919326 status.go:255] checking status of multinode-174950-m03 ...
	I0914 22:54:53.076234 2919326 cli_runner.go:164] Run: docker container inspect multinode-174950-m03 --format={{.State.Status}}
	I0914 22:54:53.093795 2919326 status.go:330] multinode-174950-m03 host status = "Stopped" (err=<nil>)
	I0914 22:54:53.093816 2919326 status.go:343] host is not running, skipping remaining checks
	I0914 22:54:53.093824 2919326 status.go:257] multinode-174950-m03 status: &{Name:multinode-174950-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.35s)
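After `node stop m03`, `minikube status` intentionally exits non-zero (exit status 7 in the output above) because one host is stopped, while still printing the per-node report on stdout. Callers therefore have to treat that exit code as data rather than as a hard failure. A minimal sketch of reading both, using Go's os/exec:

// statuscheck_sketch.go - runs `minikube status` and separates the exit code
// (7 when any node is not running) from the per-node report on stdout.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-174950", "status")
	out, err := cmd.Output() // stdout is still captured on a non-zero exit
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode() // e.g. 7: at least one node is stopped
	} else if err != nil {
		panic(err) // the binary could not be started at all
	}
	fmt.Printf("exit code: %d\n%s", code, out)
}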

                                                
                                    
TestMultiNode/serial/StartAfterStop (13.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-174950 node start m03 --alsologtostderr: (12.269366633s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.11s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (124.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-174950
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-174950
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-174950: (25.02475647s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-174950 --wait=true -v=8 --alsologtostderr
E0914 22:55:35.167294 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
E0914 22:56:02.850008 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
E0914 22:56:42.927542 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-174950 --wait=true -v=8 --alsologtostderr: (1m39.594059584s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-174950
--- PASS: TestMultiNode/serial/RestartKeepsNodes (124.75s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-174950 node delete m03: (4.3205986s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.07s)
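The last check above renders each node's Ready condition with a kubectl go-template, so after the delete the test only has to count the remaining True lines. A minimal sketch of the same readiness count, assuming kubectl's current context points at the multinode-174950 cluster:

// readycount_sketch.go - counts Ready=True nodes using the go-template from the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		panic(err)
	}
	ready := 0
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if strings.TrimSpace(line) == "True" {
			ready++
		}
	}
	fmt.Println("nodes reporting Ready=True:", ready)
}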

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-174950 stop: (23.86941345s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-174950 status: exit status 7 (87.323406ms)

                                                
                                                
-- stdout --
	multinode-174950
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-174950-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-174950 status --alsologtostderr: exit status 7 (94.364124ms)

                                                
                                                
-- stdout --
	multinode-174950
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-174950-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 22:57:40.044617 2927337 out.go:296] Setting OutFile to fd 1 ...
	I0914 22:57:40.044738 2927337 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:57:40.044748 2927337 out.go:309] Setting ErrFile to fd 2...
	I0914 22:57:40.044753 2927337 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 22:57:40.045020 2927337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-2840729/.minikube/bin
	I0914 22:57:40.045210 2927337 out.go:303] Setting JSON to false
	I0914 22:57:40.045257 2927337 mustload.go:65] Loading cluster: multinode-174950
	I0914 22:57:40.045374 2927337 notify.go:220] Checking for updates...
	I0914 22:57:40.045654 2927337 config.go:182] Loaded profile config "multinode-174950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 22:57:40.045664 2927337 status.go:255] checking status of multinode-174950 ...
	I0914 22:57:40.046125 2927337 cli_runner.go:164] Run: docker container inspect multinode-174950 --format={{.State.Status}}
	I0914 22:57:40.064936 2927337 status.go:330] multinode-174950 host status = "Stopped" (err=<nil>)
	I0914 22:57:40.064957 2927337 status.go:343] host is not running, skipping remaining checks
	I0914 22:57:40.064964 2927337 status.go:257] multinode-174950 status: &{Name:multinode-174950 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 22:57:40.064990 2927337 status.go:255] checking status of multinode-174950-m02 ...
	I0914 22:57:40.065288 2927337 cli_runner.go:164] Run: docker container inspect multinode-174950-m02 --format={{.State.Status}}
	I0914 22:57:40.083441 2927337 status.go:330] multinode-174950-m02 host status = "Stopped" (err=<nil>)
	I0914 22:57:40.083460 2927337 status.go:343] host is not running, skipping remaining checks
	I0914 22:57:40.083468 2927337 status.go:257] multinode-174950-m02 status: &{Name:multinode-174950-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.05s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (82.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-174950 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0914 22:58:05.972890 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-174950 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m21.947245228s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-174950 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (82.71s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (33.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-174950
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-174950-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-174950-m02 --driver=docker  --container-runtime=crio: exit status 14 (89.135745ms)

                                                
                                                
-- stdout --
	* [multinode-174950-m02] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-174950-m02' is duplicated with machine name 'multinode-174950-m02' in profile 'multinode-174950'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-174950-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-174950-m03 --driver=docker  --container-runtime=crio: (31.384927552s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-174950
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-174950: exit status 80 (359.592097ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-174950
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-174950-m03 already exists in multinode-174950-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-174950-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-174950-m03: (1.990125974s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.88s)

                                                
                                    
TestPreload (181.16s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-141939 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0914 22:59:49.832477 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
E0914 23:00:35.167089 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-141939 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m26.076993159s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-141939 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-141939 image pull gcr.io/k8s-minikube/busybox: (2.117041357s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-141939
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-141939: (5.860556068s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-141939 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0914 23:01:42.928088 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-141939 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m24.445998821s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-141939 image list
helpers_test.go:175: Cleaning up "test-preload-141939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-141939
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-141939: (2.408705407s)
--- PASS: TestPreload (181.16s)

                                                
                                    
TestScheduledStopUnix (110.91s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-770080 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-770080 --memory=2048 --driver=docker  --container-runtime=crio: (33.835369494s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-770080 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-770080 -n scheduled-stop-770080
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-770080 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-770080 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-770080 -n scheduled-stop-770080
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-770080
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-770080 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-770080
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-770080: exit status 7 (72.752587ms)

                                                
                                                
-- stdout --
	scheduled-stop-770080
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-770080 -n scheduled-stop-770080
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-770080 -n scheduled-stop-770080: exit status 7 (73.415988ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-770080" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-770080
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-770080: (5.413206279s)
--- PASS: TestScheduledStopUnix (110.91s)

                                                
                                    
TestInsufficientStorage (13.04s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-727065 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-727065 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.503571712s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"20be2c07-fd75-454d-a305-62820377e6e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-727065] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1c7b4a77-d8de-461e-b304-a654b4f7fb36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17243"}}
	{"specversion":"1.0","id":"e83985b6-e3ba-40f6-967d-0681c64013ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"99c21350-1e9d-417a-bd6f-e1ac409aed67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig"}}
	{"specversion":"1.0","id":"61bf9b2d-1570-4cbf-bb5e-974bc14b4985","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube"}}
	{"specversion":"1.0","id":"d1700bc0-f2f7-40bc-b05c-c0ba0db91511","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"84a0c17e-c772-4e89-bc39-ff07e2710f94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"357c2e6f-123c-4893-8eb7-52607dd07281","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"6d0b939f-333d-4ab4-b199-d16c64e85d9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"865f3d73-b295-4cf8-8910-17cd9c08e833","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4e8555b0-aad3-448a-8761-3a2459719ffe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"42e2a3cd-49b5-433e-a9fd-59b617717b34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-727065 in cluster insufficient-storage-727065","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"cff59807-f7c9-4e0c-94fc-0648613d3993","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a199aacc-5e26-45cc-980e-4f8403a06d40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1902cecb-7877-48f9-b765-e24eb424b3d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-727065 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-727065 --output=json --layout=cluster: exit status 7 (310.270126ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-727065","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-727065","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 23:04:45.861501 2944750 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-727065" does not appear in /home/jenkins/minikube-integration/17243-2840729/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-727065 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-727065 --output=json --layout=cluster: exit status 7 (316.644958ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-727065","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-727065","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 23:04:46.180838 2944804 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-727065" does not appear in /home/jenkins/minikube-integration/17243-2840729/kubeconfig
	E0914 23:04:46.192971 2944804 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/insufficient-storage-727065/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-727065" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-727065
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-727065: (1.912425674s)
--- PASS: TestInsufficientStorage (13.04s)
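
Note: the 507/"InsufficientStorage" payload above comes from "minikube status --output=json --layout=cluster". The following is a minimal, illustrative Go sketch for decoding that payload; the struct and field names are assumptions based only on the JSON keys visible in the stdout block above, not minikube's own types.

package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative types only; field names follow the JSON keys shown in the
// test's stdout, not minikube's internal structs.
type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name         string               `json:"Name"`
	StatusCode   int                  `json:"StatusCode"`
	StatusName   string               `json:"StatusName"`
	StatusDetail string               `json:"StatusDetail"`
	Components   map[string]component `json:"Components"`
	Nodes        []node               `json:"Nodes"`
}

func main() {
	// Abridged copy of the payload from the stdout block above.
	raw := `{"Name":"insufficient-storage-727065","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Nodes":[{"Name":"insufficient-storage-727065","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Println(st.StatusName, st.Nodes[0].Components["kubelet"].StatusName)
	// Output: InsufficientStorage Stopped
}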

                                                
                                    
TestKubernetesUpgrade (388.84s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-448798 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0914 23:06:42.927554 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-448798 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m7.781859304s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-448798
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-448798: (1.39724554s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-448798 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-448798 status --format={{.Host}}: exit status 7 (122.846942ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-448798 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-448798 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m45.375211714s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-448798 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-448798 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-448798 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (133.606044ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-448798] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-448798
	    minikube start -p kubernetes-upgrade-448798 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4487982 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.1, by running:
	    
	    minikube start -p kubernetes-upgrade-448798 --kubernetes-version=v1.28.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-448798 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-448798 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.542997717s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-448798" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-448798
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-448798: (2.341009406s)
--- PASS: TestKubernetesUpgrade (388.84s)
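
Note: the exit status 106 above is minikube refusing an in-place downgrade from v1.28.1 to v1.16.0. As a rough illustration only (a sketch under assumptions, not minikube's actual check), the decision reduces to a semantic version comparison between the requested and deployed Kubernetes versions; this uses golang.org/x/mod/semver and assumes that dependency is available in the module.

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func main() {
	deployed := "v1.28.1"  // version already running in the profile (from the log above)
	requested := "v1.16.0" // version passed via --kubernetes-version

	if semver.Compare(requested, deployed) < 0 {
		// The real CLI exits with code 106 (K8S_DOWNGRADE_UNSUPPORTED) on this path.
		fmt.Printf("unable to safely downgrade existing Kubernetes %s cluster to %s\n", deployed, requested)
		return
	}
	fmt.Println("same version or upgrade: proceeding")
}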

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-836473 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-836473 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (85.231997ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-836473] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (43.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-836473 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-836473 --driver=docker  --container-runtime=crio: (42.569711018s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-836473 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (31.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-836473 --no-kubernetes --driver=docker  --container-runtime=crio
E0914 23:05:35.167503 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-836473 --no-kubernetes --driver=docker  --container-runtime=crio: (29.317795155s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-836473 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-836473 status -o json: exit status 2 (446.738764ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-836473","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-836473
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-836473: (2.142506432s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (31.91s)

                                                
                                    
TestNoKubernetes/serial/Start (10.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-836473 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-836473 --no-kubernetes --driver=docker  --container-runtime=crio: (10.030279689s)
--- PASS: TestNoKubernetes/serial/Start (10.03s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-836473 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-836473 "sudo systemctl is-active --quiet service kubelet": exit status 1 (403.022196ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.40s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.96s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-836473
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-836473: (1.25521295s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-836473 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-836473 --driver=docker  --container-runtime=crio: (7.45194779s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.45s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-836473 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-836473 "sudo systemctl is-active --quiet service kubelet": exit status 1 (380.580996ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.04s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.04s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-686061
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.66s)

                                                
                                    
TestPause/serial/Start (55.04s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-188837 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0914 23:10:35.167094 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-188837 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (55.042323638s)
--- PASS: TestPause/serial/Start (55.04s)

                                                
                                    
TestNetworkPlugins/group/false (5.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-811741 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-811741 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (323.275983ms)

                                                
                                                
-- stdout --
	* [false-811741] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17243
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 23:13:00.161296 2984356 out.go:296] Setting OutFile to fd 1 ...
	I0914 23:13:00.161564 2984356 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 23:13:00.161577 2984356 out.go:309] Setting ErrFile to fd 2...
	I0914 23:13:00.161584 2984356 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 23:13:00.161910 2984356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17243-2840729/.minikube/bin
	I0914 23:13:00.162415 2984356 out.go:303] Setting JSON to false
	I0914 23:13:00.163653 2984356 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":82525,"bootTime":1694650655,"procs":278,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0914 23:13:00.163728 2984356 start.go:138] virtualization:  
	I0914 23:13:00.168277 2984356 out.go:177] * [false-811741] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0914 23:13:00.171193 2984356 out.go:177]   - MINIKUBE_LOCATION=17243
	I0914 23:13:00.171258 2984356 notify.go:220] Checking for updates...
	I0914 23:13:00.174382 2984356 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 23:13:00.176055 2984356 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17243-2840729/kubeconfig
	I0914 23:13:00.177768 2984356 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17243-2840729/.minikube
	I0914 23:13:00.179624 2984356 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 23:13:00.181703 2984356 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 23:13:00.195417 2984356 config.go:182] Loaded profile config "force-systemd-flag-623081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0914 23:13:00.195633 2984356 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 23:13:00.248032 2984356 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0914 23:13:00.248146 2984356 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 23:13:00.393889 2984356 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-09-14 23:13:00.383406976 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0914 23:13:00.393993 2984356 docker.go:294] overlay module found
	I0914 23:13:00.395897 2984356 out.go:177] * Using the docker driver based on user configuration
	I0914 23:13:00.397526 2984356 start.go:298] selected driver: docker
	I0914 23:13:00.397542 2984356 start.go:902] validating driver "docker" against <nil>
	I0914 23:13:00.397555 2984356 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 23:13:00.400102 2984356 out.go:177] 
	W0914 23:13:00.403194 2984356 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0914 23:13:00.405069 2984356 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-811741 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-811741

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-811741

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-811741

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-811741

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-811741

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-811741

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-811741

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-811741

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-811741

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-811741

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-811741

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-811741" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-811741" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-811741

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

>>> host: /etc/docker/daemon.json:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

>>> host: docker system info:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

>>> host: cri-docker daemon status:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

>>> host: cri-docker daemon config:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

>>> host: cri-dockerd version:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

>>> host: containerd daemon status:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

>>> host: containerd daemon config:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

>>> host: /etc/containerd/config.toml:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

>>> host: containerd config dump:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

>>> host: crio daemon status:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

>>> host: crio daemon config:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

>>> host: /etc/crio:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

>>> host: crio config:
* Profile "false-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811741"

----------------------- debugLogs end: false-811741 [took: 4.788825572s] --------------------------------
helpers_test.go:175: Cleaning up "false-811741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-811741
--- PASS: TestNetworkPlugins/group/false (5.39s)
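
The debug dump above repeatedly reports that the "false-811741" profile does not exist, which is consistent with no cluster having been started for this group. For local triage, a minimal sketch (using the same out/minikube-linux-arm64 binary path this report uses) for confirming and clearing a leftover profile:

    # List known profiles; "false-811741" should be absent after cleanup
    out/minikube-linux-arm64 profile list
    # Remove the profile explicitly if it is still present
    out/minikube-linux-arm64 delete -p false-811741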

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (135.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-856998 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0914 23:14:45.973639 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
E0914 23:14:49.832139 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
E0914 23:15:35.167442 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-856998 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m15.322287035s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (135.32s)
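
The flags below are copied from the invocation logged above; as a minimal sketch, the same first start can be reproduced against a local Docker host. The kvm-related flags are presumably ignored under the docker driver and are kept only to mirror the logged command:

    # Start a fresh profile on the crio runtime, pinned to Kubernetes v1.16.0
    out/minikube-linux-arm64 start -p old-k8s-version-856998 \
      --memory=2200 --alsologtostderr --wait=true \
      --kvm-network=default --kvm-qemu-uri=qemu:///system \
      --disable-driver-mounts --keep-context=false \
      --driver=docker --container-runtime=crio \
      --kubernetes-version=v1.16.0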

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-856998 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c4583b84-e371-4761-b2f6-43f072d07c85] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0914 23:16:42.928291 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
helpers_test.go:344: "busybox" [c4583b84-e371-4761-b2f6-43f072d07c85] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.031649603s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-856998 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.57s)
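
The harness drives this step from Go (start_stop_delete_test.go / helpers_test.go), but the flow reduces to a create, a readiness wait, and an exec. A rough kubectl-only approximation (the kubectl wait call is an assumption; the manifest, label, and exec command come from the log above):

    # Create the busybox test pod from the repo's testdata manifest
    kubectl --context old-k8s-version-856998 create -f testdata/busybox.yaml
    # Approximates the harness's 8-minute readiness poll on the integration-test=busybox label
    kubectl --context old-k8s-version-856998 wait --for=condition=ready pod \
      -l integration-test=busybox --timeout=8m
    # The test then reads the open-file limit inside the container
    kubectl --context old-k8s-version-856998 exec busybox -- /bin/sh -c "ulimit -n"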

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-856998 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-856998 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.181961058s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-856998 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.66s)
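
Enabling metrics-server while the cluster is running is a plain addons call with image and registry overrides; the values below are the ones used in the run above, and the describe call is how the test checks that the override reached the Deployment:

    out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-856998 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    kubectl --context old-k8s-version-856998 describe deploy/metrics-server -n kube-system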

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-856998 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-856998 --alsologtostderr -v=3: (12.095555546s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-856998 -n old-k8s-version-856998
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-856998 -n old-k8s-version-856998: exit status 7 (75.396733ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-856998 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
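
The "status error: exit status 7 (may be ok)" lines show that the test tolerates a non-zero exit from "status" while the host is stopped. A small shell sketch of that check (treating exit status 7 as "stopped" is inferred from the behaviour logged above, not from documented exit codes):

    # {{.Host}} prints the host state; a stopped profile exits non-zero
    out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-856998 -n old-k8s-version-856998
    if [ $? -eq 7 ]; then
      echo "host reports Stopped; addons can still be toggled"
    fi
    out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-856998 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4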

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (442.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-856998 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-856998 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m22.435866485s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-856998 -n old-k8s-version-856998
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (442.99s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (71.48s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-759632 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-759632 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (1m11.482542568s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (71.48s)
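
The distinguishing flag for this group is --preload=false, which skips minikube's preloaded image tarball so component images are pulled individually; that likely accounts for the somewhat longer first start here than for the preloaded profiles later in this report. The invocation, copied from the log:

    out/minikube-linux-arm64 start -p no-preload-759632 \
      --memory=2200 --alsologtostderr --wait=true --preload=false \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.28.1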

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.46s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-759632 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8b82b7d1-9bae-4a6c-a696-b62ff1f00c15] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8b82b7d1-9bae-4a6c-a696-b62ff1f00c15] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.028219897s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-759632 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.46s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-759632 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-759632 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-759632 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-759632 --alsologtostderr -v=3: (12.079510592s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-759632 -n no-preload-759632
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-759632 -n no-preload-759632: exit status 7 (70.209281ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-759632 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (349.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-759632 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
E0914 23:19:49.832828 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
E0914 23:20:35.166688 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
E0914 23:21:42.927287 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
E0914 23:23:38.211940 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-759632 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (5m49.024082298s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-759632 -n no-preload-759632
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (349.41s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-t245r" [445e2d0b-443b-4015-b8c6-04a61402fd18] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.045500563s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.05s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-t245r" [445e2d0b-443b-4015-b8c6-04a61402fd18] Running
E0914 23:24:32.877685 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008781485s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-856998 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)
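
Both dashboard checks above look for the same pod by label after the second start. A rough by-hand equivalent of what the harness verifies (the get/describe pairing is an approximation; namespace, label, deployment name, and context are taken from the log):

    # The user-visible dashboard pod should have survived the stop/start cycle
    kubectl --context old-k8s-version-856998 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
    # The metrics-scraper deployment installed by the dashboard addon should also be present
    kubectl --context old-k8s-version-856998 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard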

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-856998 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.41s)
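
The image check runs crictl inside the node over SSH and parses the JSON on the Go side. For inspecting the same list by hand, a small sketch (piping through jq is an assumption for readability; the test itself does not use it):

    # Dump the CRI image list as JSON from inside the node
    out/minikube-linux-arm64 ssh -p old-k8s-version-856998 "sudo crictl images -o json"
    # Optionally extract just the repo tags (assumes jq is installed on the host)
    out/minikube-linux-arm64 ssh -p old-k8s-version-856998 "sudo crictl images -o json" \
      | jq -r '.images[].repoTags[]'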

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-856998 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-856998 -n old-k8s-version-856998
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-856998 -n old-k8s-version-856998: exit status 2 (405.304473ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-856998 -n old-k8s-version-856998
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-856998 -n old-k8s-version-856998: exit status 2 (361.679842ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-856998 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-856998 -n old-k8s-version-856998
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-856998 -n old-k8s-version-856998
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.69s)
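
The pause test asserts that, while paused, the {{.APIServer}} template reports Paused and the {{.Kubelet}} template reports Stopped (each via exit status 2), and that both recover after unpause. Condensed into the shell commands logged above:

    out/minikube-linux-arm64 pause -p old-k8s-version-856998 --alsologtostderr -v=1
    # While paused: APIServer=Paused, Kubelet=Stopped, each with exit status 2
    out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-856998 -n old-k8s-version-856998
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-856998 -n old-k8s-version-856998
    out/minikube-linux-arm64 unpause -p old-k8s-version-856998 --alsologtostderr -v=1
    # After unpause, both templates should report a running state again
    out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-856998 -n old-k8s-version-856998
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-856998 -n old-k8s-version-856998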

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (55.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-365914 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
E0914 23:24:49.832870 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-365914 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (55.475615361s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (55.48s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (19.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-5jpzj" [57628beb-93fa-41b0-b4e8-6727d90ed954] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-5jpzj" [57628beb-93fa-41b0-b4e8-6727d90ed954] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 19.025628866s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (19.03s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-5jpzj" [57628beb-93fa-41b0-b4e8-6727d90ed954] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00999336s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-759632 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-759632 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.45s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (4.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-759632 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-759632 -n no-preload-759632
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-759632 -n no-preload-759632: exit status 2 (359.290846ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-759632 -n no-preload-759632
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-759632 -n no-preload-759632: exit status 2 (374.534858ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-759632 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-759632 --alsologtostderr -v=1: (1.126344092s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-759632 -n no-preload-759632
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-759632 -n no-preload-759632
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-489074 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
E0914 23:25:35.166851 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-489074 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (56.424780426s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.42s)
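
This group starts the apiserver on port 8444 via --apiserver-port=8444 rather than minikube's usual 8443. One way to confirm the port after the start, sketched here under the assumption that minikube names the kubeconfig cluster entry after the profile:

    # Print the server URL recorded in kubeconfig for this profile (assumption: entry is named after the profile)
    kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-489074")].cluster.server}'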

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.69s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-365914 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [22ffa163-7b4b-4847-aa3b-426e1fcb673b] Pending
helpers_test.go:344: "busybox" [22ffa163-7b4b-4847-aa3b-426e1fcb673b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [22ffa163-7b4b-4847-aa3b-426e1fcb673b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.03526864s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-365914 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.69s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.69s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-365914 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-365914 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.476582971s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-365914 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.69s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-365914 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-365914 --alsologtostderr -v=3: (12.362466968s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.36s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-365914 -n embed-certs-365914
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-365914 -n embed-certs-365914: exit status 7 (74.452652ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-365914 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (622.69s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-365914 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-365914 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (10m22.27813436s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-365914 -n embed-certs-365914
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (622.69s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-489074 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d100836c-fdce-4c67-a3c5-000755c82daa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d100836c-fdce-4c67-a3c5-000755c82daa] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.025563581s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-489074 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.65s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.73s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-489074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-489074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.538403755s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-489074 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.73s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-489074 --alsologtostderr -v=3
E0914 23:26:40.023764 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/old-k8s-version-856998/client.crt: no such file or directory
E0914 23:26:40.029100 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/old-k8s-version-856998/client.crt: no such file or directory
E0914 23:26:40.039376 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/old-k8s-version-856998/client.crt: no such file or directory
E0914 23:26:40.059709 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/old-k8s-version-856998/client.crt: no such file or directory
E0914 23:26:40.099979 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/old-k8s-version-856998/client.crt: no such file or directory
E0914 23:26:40.180276 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/old-k8s-version-856998/client.crt: no such file or directory
E0914 23:26:40.340672 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/old-k8s-version-856998/client.crt: no such file or directory
E0914 23:26:40.661231 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/old-k8s-version-856998/client.crt: no such file or directory
E0914 23:26:41.301674 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/old-k8s-version-856998/client.crt: no such file or directory
E0914 23:26:42.582685 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/old-k8s-version-856998/client.crt: no such file or directory
E0914 23:26:42.928269 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-489074 --alsologtostderr -v=3: (12.35685348s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.36s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-489074 -n default-k8s-diff-port-489074
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-489074 -n default-k8s-diff-port-489074: exit status 7 (77.940281ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-489074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (629.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-489074 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
E0914 23:26:45.142903 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/old-k8s-version-856998/client.crt: no such file or directory
E0914 23:26:50.263769 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/old-k8s-version-856998/client.crt: no such file or directory
E0914 23:27:00.504777 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/old-k8s-version-856998/client.crt: no such file or directory
E0914 23:27:20.985209 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/old-k8s-version-856998/client.crt: no such file or directory
E0914 23:28:01.945413 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/old-k8s-version-856998/client.crt: no such file or directory
E0914 23:28:37.824408 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/no-preload-759632/client.crt: no such file or directory
E0914 23:28:37.829705 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/no-preload-759632/client.crt: no such file or directory
E0914 23:28:37.839927 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/no-preload-759632/client.crt: no such file or directory
E0914 23:28:37.860252 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/no-preload-759632/client.crt: no such file or directory
E0914 23:28:37.900624 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/no-preload-759632/client.crt: no such file or directory
E0914 23:28:37.981113 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/no-preload-759632/client.crt: no such file or directory
E0914 23:28:38.141465 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/no-preload-759632/client.crt: no such file or directory
E0914 23:28:38.461601 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/no-preload-759632/client.crt: no such file or directory
E0914 23:28:39.101985 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/no-preload-759632/client.crt: no such file or directory
E0914 23:28:40.382925 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/no-preload-759632/client.crt: no such file or directory
E0914 23:28:42.943102 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/no-preload-759632/client.crt: no such file or directory
E0914 23:28:48.064163 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/no-preload-759632/client.crt: no such file or directory
E0914 23:28:58.304646 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/no-preload-759632/client.crt: no such file or directory
E0914 23:29:18.784845 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/no-preload-759632/client.crt: no such file or directory
E0914 23:29:23.866100 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/old-k8s-version-856998/client.crt: no such file or directory
E0914 23:29:49.832674 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
E0914 23:29:59.745121 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/no-preload-759632/client.crt: no such file or directory
E0914 23:30:35.166670 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
E0914 23:31:21.665337 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/no-preload-759632/client.crt: no such file or directory
E0914 23:31:25.974497 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
E0914 23:31:40.022967 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/old-k8s-version-856998/client.crt: no such file or directory
E0914 23:31:42.928222 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
E0914 23:32:07.706757 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/old-k8s-version-856998/client.crt: no such file or directory
E0914 23:33:37.823977 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/no-preload-759632/client.crt: no such file or directory
E0914 23:34:05.505474 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/no-preload-759632/client.crt: no such file or directory
E0914 23:34:49.832032 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
E0914 23:35:35.167131 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-489074 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (10m28.944455469s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-489074 -n default-k8s-diff-port-489074
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (629.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-bzjbv" [a55c9c72-8cd6-40d4-b265-7952e2d473cb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.023710801s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-bzjbv" [a55c9c72-8cd6-40d4-b265-7952e2d473cb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010297458s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-365914 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-365914 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.77s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-365914 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-365914 -n embed-certs-365914
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-365914 -n embed-certs-365914: exit status 2 (455.397059ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-365914 -n embed-certs-365914
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-365914 -n embed-certs-365914: exit status 2 (453.438513ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-365914 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-365914 -n embed-certs-365914
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-365914 -n embed-certs-365914
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.77s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (46.93s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-145602 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
E0914 23:36:42.927376 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/functional-127648/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-145602 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (46.93135509s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.93s)
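
This profile exercises a CNI-oriented configuration: the readiness wait is narrowed to apiserver, system_pods and default_sa, a kubeadm pod-network CIDR is passed through --extra-config, and ServerSideApply is set as a feature gate. The invocation, copied from the run above:

    out/minikube-linux-arm64 start -p newest-cni-145602 \
      --memory=2200 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.28.1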

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jvl4g" [15cae91c-afc2-4bc1-9a52-581e19336556] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.030390642s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jvl4g" [15cae91c-afc2-4bc1-9a52-581e19336556] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009941056s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-489074 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-489074 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.42s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.88s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-489074 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-489074 -n default-k8s-diff-port-489074
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-489074 -n default-k8s-diff-port-489074: exit status 2 (385.761071ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-489074 -n default-k8s-diff-port-489074
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-489074 -n default-k8s-diff-port-489074: exit status 2 (385.383924ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-489074 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-489074 -n default-k8s-diff-port-489074
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-489074 -n default-k8s-diff-port-489074
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.88s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.01s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-145602 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-145602 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.01330799s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.01s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (59.86s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-811741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-811741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (59.862952681s)
--- PASS: TestNetworkPlugins/group/auto/Start (59.86s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-145602 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-145602 --alsologtostderr -v=3: (1.372459623s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-145602 -n newest-cni-145602
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-145602 -n newest-cni-145602: exit status 7 (93.24844ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-145602 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (36.66s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-145602 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-145602 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (36.14051209s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-145602 -n newest-cni-145602
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.66s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.49s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-145602 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.49s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.6s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-145602 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-145602 -n newest-cni-145602
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-145602 -n newest-cni-145602: exit status 2 (378.220245ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-145602 -n newest-cni-145602
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-145602 -n newest-cni-145602: exit status 2 (400.457013ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-145602 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-145602 -n newest-cni-145602
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-145602 -n newest-cni-145602
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.60s)
E0914 23:43:41.950092 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/auto-811741/client.crt: no such file or directory
E0914 23:43:52.191092 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/auto-811741/client.crt: no such file or directory
E0914 23:44:03.564225 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/default-k8s-diff-port-489074/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (55.24s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-811741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-811741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (55.236374669s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (55.24s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-811741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (13.4s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-811741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nbqvs" [fdf5c574-f853-4433-93a4-dafffcb35d15] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 23:38:37.823751 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/no-preload-759632/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-nbqvs" [fdf5c574-f853-4433-93a4-dafffcb35d15] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.011077115s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.40s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.32s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-811741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.32s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.31s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-811741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.31s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-811741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (74.66s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-811741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-811741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m14.657019353s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.66s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-24jf9" [bfc332dd-da61-4f7f-9650-39c9981d8570] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.046380895s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-811741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.38s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-811741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-96dkp" [cc2c1c6a-89f9-47bc-b0bb-9311d659b503] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-96dkp" [cc2c1c6a-89f9-47bc-b0bb-9311d659b503] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.010550273s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.38s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.32s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-811741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-811741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-811741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (73.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-811741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0914 23:40:18.212107 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-811741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m13.16430182s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.16s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.07s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-sqn7l" [b4b9f391-df91-460a-ab1b-73720c876043] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.07125712s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.07s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.52s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-811741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.52s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.55s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-811741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fwzk4" [9e88904e-f523-4f03-bddf-396753191448] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 23:40:35.166734 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/ingress-addon-legacy-438037/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-fwzk4" [9e88904e-f523-4f03-bddf-396753191448] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.023980782s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.55s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-811741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-811741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-811741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (53.89s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-811741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-811741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (53.885278849s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (53.89s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-811741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.48s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-811741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dzp2p" [899814be-bfd9-4988-be1d-f731b3737d29] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 23:41:12.878458 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/addons-909789/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-dzp2p" [899814be-bfd9-4988-be1d-f731b3737d29] Running
E0914 23:41:19.719803 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/default-k8s-diff-port-489074/client.crt: no such file or directory
E0914 23:41:19.725046 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/default-k8s-diff-port-489074/client.crt: no such file or directory
E0914 23:41:19.735268 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/default-k8s-diff-port-489074/client.crt: no such file or directory
E0914 23:41:19.755500 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/default-k8s-diff-port-489074/client.crt: no such file or directory
E0914 23:41:19.796626 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/default-k8s-diff-port-489074/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.013758999s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.48s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-811741 exec deployment/netcat -- nslookup kubernetes.default
E0914 23:41:19.877498 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/default-k8s-diff-port-489074/client.crt: no such file or directory
E0914 23:41:20.038237 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/default-k8s-diff-port-489074/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-811741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0914 23:41:20.358803 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/default-k8s-diff-port-489074/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-811741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (69.53s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-811741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-811741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m9.5268523s)
--- PASS: TestNetworkPlugins/group/flannel/Start (69.53s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-811741 "pgrep -a kubelet"
E0914 23:42:00.682862 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/default-k8s-diff-port-489074/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.51s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-811741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qdhvg" [4c1bff67-ff56-414d-9df0-75747b8b4676] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qdhvg" [4c1bff67-ff56-414d-9df0-75747b8b4676] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.012727828s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.51s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-811741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-811741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-811741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (87.65s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-811741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0914 23:42:41.643992 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/default-k8s-diff-port-489074/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-811741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m27.648673108s)
--- PASS: TestNetworkPlugins/group/bridge/Start (87.65s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-f52rj" [95fd6e93-5db8-4958-a63e-c5201979869f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.047895937s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.05s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-811741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.47s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.47s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-811741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-55s2l" [a7c6a9cd-cc74-4c23-a6ff-73564556f92f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 23:43:03.067071 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/old-k8s-version-856998/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-55s2l" [a7c6a9cd-cc74-4c23-a6ff-73564556f92f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.013607824s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.47s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-811741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-811741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-811741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-811741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.36s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-811741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6xfr9" [99289a15-6a2b-4844-b26f-1af7a831ac48] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 23:44:11.850671 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/kindnet-811741/client.crt: no such file or directory
E0914 23:44:11.855918 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/kindnet-811741/client.crt: no such file or directory
E0914 23:44:11.866159 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/kindnet-811741/client.crt: no such file or directory
E0914 23:44:11.886417 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/kindnet-811741/client.crt: no such file or directory
E0914 23:44:11.926689 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/kindnet-811741/client.crt: no such file or directory
E0914 23:44:12.006953 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/kindnet-811741/client.crt: no such file or directory
E0914 23:44:12.167395 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/kindnet-811741/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-6xfr9" [99289a15-6a2b-4844-b26f-1af7a831ac48] Running
E0914 23:44:12.487734 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/kindnet-811741/client.crt: no such file or directory
E0914 23:44:12.672084 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/auto-811741/client.crt: no such file or directory
E0914 23:44:13.128848 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/kindnet-811741/client.crt: no such file or directory
E0914 23:44:14.409349 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/kindnet-811741/client.crt: no such file or directory
E0914 23:44:16.969816 2846109 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/kindnet-811741/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.020254927s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.36s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-811741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-811741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-811741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                    

Test skip (29/298)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.1/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.62s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-646598 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-646598" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-646598
--- SKIP: TestDownloadOnlyKic (0.62s)

                                                
                                    
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-187111" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-187111
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (5.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-811741 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-811741

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-811741

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-811741

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-811741

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-811741

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-811741

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-811741

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-811741

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-811741

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-811741

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-811741

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-811741" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-811741" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-811741

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811741"

                                                
                                                
----------------------- debugLogs end: kubenet-811741 [took: 5.604235614s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-811741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-811741
--- SKIP: TestNetworkPlugins/group/kubenet (5.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-811741 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-811741

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-811741

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-811741

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-811741

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-811741

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-811741

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-811741

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-811741

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-811741

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-811741

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-811741

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-811741" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-811741

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-811741

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-811741

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-811741

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-811741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-811741" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17243-2840729/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 14 Sep 2023 23:13:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.76.2:8443
  name: force-systemd-flag-623081
contexts:
- context:
    cluster: force-systemd-flag-623081
    extensions:
    - extension:
        last-update: Thu, 14 Sep 2023 23:13:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: force-systemd-flag-623081
  name: force-systemd-flag-623081
current-context: force-systemd-flag-623081
kind: Config
preferences: {}
users:
- name: force-systemd-flag-623081
  user:
    client-certificate: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/force-systemd-flag-623081/client.crt
    client-key: /home/jenkins/minikube-integration/17243-2840729/.minikube/profiles/force-systemd-flag-623081/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-811741

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-811741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811741"

                                                
                                                
----------------------- debugLogs end: cilium-811741 [took: 4.930141767s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-811741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-811741
--- SKIP: TestNetworkPlugins/group/cilium (5.17s)

                                                
                                    