Test Report: Docker_Linux_crio_arm64 17116

df10b09dbbeac24ae88706f418e89fa15ebc408d:2023-09-06:30896

Failed tests (7/298)

| Order | Failed test                                          | Duration (s) |
|-------|------------------------------------------------------|--------------|
| 25    | TestAddons/parallel/Ingress                          | 169.99       |
| 154   | TestIngressAddonLegacy/serial/ValidateIngressAddons  | 182.13       |
| 204   | TestMultiNode/serial/PingHostFrom2Pods               | 4.46         |
| 225   | TestRunningBinaryUpgrade                             | 74.76        |
| 228   | TestMissingContainerUpgrade                          | 139.26       |
| 241   | TestPause/serial/SecondStartNoReconfiguration        | 73.38        |
| 243   | TestStoppedBinaryUpgrade/Upgrade                     | 105.53       |
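
Each entry above is a Go subtest in minikube's integration suite, so an individual failure can be re-run in isolation. A minimal sketch, assuming a checkout of the commit listed in the header with the suite under ./test/integration (the suite also takes its own flags, e.g. for the prebuilt minikube binary and the driver, which are omitted here):

# Re-run only the failing ingress subtest with verbose output and a generous timeout.
go test -v -timeout 90m -run 'TestAddons/parallel/Ingress' ./test/integration
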
TestAddons/parallel/Ingress (169.99s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-342654 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-342654 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-342654 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d134227d-7bbc-45bf-bb49-aacdce384a95] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d134227d-7bbc-45bf-bb49-aacdce384a95] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.013719702s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p addons-342654 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-342654 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.595170372s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
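
For reference, exit status 28 is curl's "operation timed out" code, so the request made from inside the node never got an answer from the ingress controller. A minimal sketch of re-checking the same route by hand, reusing only commands that already appear in this log (the profile name addons-342654 is specific to this run):

# Is the ingress-nginx controller still running, and does the Ingress object exist?
kubectl --context addons-342654 -n ingress-nginx get pods,svc
kubectl --context addons-342654 get ingress
# Repeat the test's probe from inside the node, with verbose curl output.
out/minikube-linux-arm64 -p addons-342654 ssh "curl -v http://127.0.0.1/ -H 'Host: nginx.example.com'"
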
addons_test.go:262: (dbg) Run:  kubectl --context addons-342654 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p addons-342654 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.05645004s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
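
"connection timed out; no servers could be reached" means nothing answered DNS queries on the node IP (192.168.49.2), where the ingress-dns addon is expected to listen on port 53. A minimal sketch of the same check done by hand, built from the commands in this log (the dig query is an assumed alternative, not part of the test):

MINIKUBE_IP=$(out/minikube-linux-arm64 -p addons-342654 ip)
nslookup hello-john.test "$MINIKUBE_IP"
# Alternative query with an explicit short timeout.
dig +time=5 +short hello-john.test @"$MINIKUBE_IP"
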
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p addons-342654 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p addons-342654 addons disable ingress-dns --alsologtostderr -v=1: (1.058069091s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p addons-342654 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p addons-342654 addons disable ingress --alsologtostderr -v=1: (7.797687362s)
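
The post-mortem section that follows is emitted automatically by the test helpers; to collect the same information by hand for this profile, the helpers' own commands (all of which appear verbatim below) are:

docker inspect addons-342654
out/minikube-linux-arm64 status --format={{.Host}} -p addons-342654 -n addons-342654
out/minikube-linux-arm64 -p addons-342654 logs -n 25
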
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-342654
helpers_test.go:235: (dbg) docker inspect addons-342654:

-- stdout --
	[
	    {
	        "Id": "65738520cddf576a7c70ae9f8fedd370eb956d9a14a159770eb10f9d1f33832e",
	        "Created": "2023-09-06T19:57:29.125963752Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 658860,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-06T19:57:29.451479651Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c0704b3a4f8b9b9ec71e677be36506d49ffd7d56513ca0bdb5d12d8921195405",
	        "ResolvConfPath": "/var/lib/docker/containers/65738520cddf576a7c70ae9f8fedd370eb956d9a14a159770eb10f9d1f33832e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/65738520cddf576a7c70ae9f8fedd370eb956d9a14a159770eb10f9d1f33832e/hostname",
	        "HostsPath": "/var/lib/docker/containers/65738520cddf576a7c70ae9f8fedd370eb956d9a14a159770eb10f9d1f33832e/hosts",
	        "LogPath": "/var/lib/docker/containers/65738520cddf576a7c70ae9f8fedd370eb956d9a14a159770eb10f9d1f33832e/65738520cddf576a7c70ae9f8fedd370eb956d9a14a159770eb10f9d1f33832e-json.log",
	        "Name": "/addons-342654",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-342654:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-342654",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5ba47685e874bfbe8e8040a9f17a11849df990eb604a88421e07a285a5ab3241-init/diff:/var/lib/docker/overlay2/ba2e4d17dafea75bb4f24482e38d11907530383cc2bd79f5b12dd92aeb991448/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5ba47685e874bfbe8e8040a9f17a11849df990eb604a88421e07a285a5ab3241/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5ba47685e874bfbe8e8040a9f17a11849df990eb604a88421e07a285a5ab3241/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5ba47685e874bfbe8e8040a9f17a11849df990eb604a88421e07a285a5ab3241/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-342654",
	                "Source": "/var/lib/docker/volumes/addons-342654/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-342654",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-342654",
	                "name.minikube.sigs.k8s.io": "addons-342654",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c60ee5292f971be4177fce7b09e6667a6c8e29f3b20c7bdd34e4896e0aff38c5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33417"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33416"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33413"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33415"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33414"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c60ee5292f97",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-342654": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "65738520cddf",
	                        "addons-342654"
	                    ],
	                    "NetworkID": "de0931cb7eed402ab26c43da90da258b253880c9dc3bad11e26d1b5c2543a653",
	                    "EndpointID": "1a80b4998fa4bba3d1ad58b446d872ba60887547d6c9393b0e519c9fef9f437b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-342654 -n addons-342654
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-342654 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-342654 logs -n 25: (1.671494662s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-363440   | jenkins | v1.31.2 | 06 Sep 23 19:56 UTC |                     |
	|         | -p download-only-363440        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-363440   | jenkins | v1.31.2 | 06 Sep 23 19:56 UTC |                     |
	|         | -p download-only-363440        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.31.2 | 06 Sep 23 19:57 UTC | 06 Sep 23 19:57 UTC |
	| delete  | -p download-only-363440        | download-only-363440   | jenkins | v1.31.2 | 06 Sep 23 19:57 UTC | 06 Sep 23 19:57 UTC |
	| delete  | -p download-only-363440        | download-only-363440   | jenkins | v1.31.2 | 06 Sep 23 19:57 UTC | 06 Sep 23 19:57 UTC |
	| start   | --download-only -p             | download-docker-337090 | jenkins | v1.31.2 | 06 Sep 23 19:57 UTC |                     |
	|         | download-docker-337090         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p download-docker-337090      | download-docker-337090 | jenkins | v1.31.2 | 06 Sep 23 19:57 UTC | 06 Sep 23 19:57 UTC |
	| start   | --download-only -p             | binary-mirror-500664   | jenkins | v1.31.2 | 06 Sep 23 19:57 UTC |                     |
	|         | binary-mirror-500664           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44129         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-500664        | binary-mirror-500664   | jenkins | v1.31.2 | 06 Sep 23 19:57 UTC | 06 Sep 23 19:57 UTC |
	| start   | -p addons-342654               | addons-342654          | jenkins | v1.31.2 | 06 Sep 23 19:57 UTC | 06 Sep 23 19:59 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-342654          | jenkins | v1.31.2 | 06 Sep 23 19:59 UTC | 06 Sep 23 19:59 UTC |
	|         | addons-342654                  |                        |         |         |                     |                     |
	| addons  | enable headlamp                | addons-342654          | jenkins | v1.31.2 | 06 Sep 23 19:59 UTC | 06 Sep 23 19:59 UTC |
	|         | -p addons-342654               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-342654 ip               | addons-342654          | jenkins | v1.31.2 | 06 Sep 23 20:00 UTC | 06 Sep 23 20:00 UTC |
	| addons  | addons-342654 addons disable   | addons-342654          | jenkins | v1.31.2 | 06 Sep 23 20:00 UTC | 06 Sep 23 20:00 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-342654 addons           | addons-342654          | jenkins | v1.31.2 | 06 Sep 23 20:00 UTC | 06 Sep 23 20:00 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-342654          | jenkins | v1.31.2 | 06 Sep 23 20:00 UTC | 06 Sep 23 20:00 UTC |
	|         | addons-342654                  |                        |         |         |                     |                     |
	| ssh     | addons-342654 ssh curl -s      | addons-342654          | jenkins | v1.31.2 | 06 Sep 23 20:00 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| addons  | addons-342654 addons           | addons-342654          | jenkins | v1.31.2 | 06 Sep 23 20:00 UTC | 06 Sep 23 20:01 UTC |
	|         | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-342654 addons           | addons-342654          | jenkins | v1.31.2 | 06 Sep 23 20:01 UTC | 06 Sep 23 20:01 UTC |
	|         | disable volumesnapshots        |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-342654 ip               | addons-342654          | jenkins | v1.31.2 | 06 Sep 23 20:02 UTC | 06 Sep 23 20:02 UTC |
	| addons  | addons-342654 addons disable   | addons-342654          | jenkins | v1.31.2 | 06 Sep 23 20:03 UTC | 06 Sep 23 20:03 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-342654 addons disable   | addons-342654          | jenkins | v1.31.2 | 06 Sep 23 20:03 UTC | 06 Sep 23 20:03 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 19:57:05
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 19:57:05.209110  658394 out.go:296] Setting OutFile to fd 1 ...
	I0906 19:57:05.209335  658394 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 19:57:05.209359  658394 out.go:309] Setting ErrFile to fd 2...
	I0906 19:57:05.209379  658394 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 19:57:05.209689  658394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17116-652515/.minikube/bin
	I0906 19:57:05.210203  658394 out.go:303] Setting JSON to false
	I0906 19:57:05.211216  658394 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":9380,"bootTime":1694020846,"procs":283,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0906 19:57:05.211311  658394 start.go:138] virtualization:  
	I0906 19:57:05.214184  658394 out.go:177] * [addons-342654] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0906 19:57:05.216529  658394 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 19:57:05.218539  658394 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 19:57:05.216666  658394 notify.go:220] Checking for updates...
	I0906 19:57:05.220805  658394 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig
	I0906 19:57:05.222710  658394 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube
	I0906 19:57:05.224581  658394 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0906 19:57:05.226121  658394 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 19:57:05.227902  658394 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 19:57:05.252596  658394 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0906 19:57:05.252726  658394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 19:57:05.340818  658394 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-09-06 19:57:05.330763848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0906 19:57:05.340955  658394 docker.go:294] overlay module found
	I0906 19:57:05.342839  658394 out.go:177] * Using the docker driver based on user configuration
	I0906 19:57:05.344791  658394 start.go:298] selected driver: docker
	I0906 19:57:05.344811  658394 start.go:902] validating driver "docker" against <nil>
	I0906 19:57:05.344827  658394 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 19:57:05.345448  658394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 19:57:05.414166  658394 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-09-06 19:57:05.404733979 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0906 19:57:05.414330  658394 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 19:57:05.414650  658394 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 19:57:05.416826  658394 out.go:177] * Using Docker driver with root privileges
	I0906 19:57:05.418920  658394 cni.go:84] Creating CNI manager for ""
	I0906 19:57:05.418939  658394 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0906 19:57:05.418956  658394 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0906 19:57:05.418971  658394 start_flags.go:321] config:
	{Name:addons-342654 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-342654 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 19:57:05.421412  658394 out.go:177] * Starting control plane node addons-342654 in cluster addons-342654
	I0906 19:57:05.423280  658394 cache.go:122] Beginning downloading kic base image for docker with crio
	I0906 19:57:05.425082  658394 out.go:177] * Pulling base image ...
	I0906 19:57:05.426752  658394 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0906 19:57:05.426808  658394 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4
	I0906 19:57:05.426821  658394 cache.go:57] Caching tarball of preloaded images
	I0906 19:57:05.426844  658394 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon
	I0906 19:57:05.426894  658394 preload.go:174] Found /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0906 19:57:05.426904  658394 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0906 19:57:05.427280  658394 profile.go:148] Saving config to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/config.json ...
	I0906 19:57:05.427311  658394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/config.json: {Name:mk1ac0232d159da12bb2c3747fc68c67caea393b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:57:05.444066  658394 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad to local cache
	I0906 19:57:05.444223  658394 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local cache directory
	I0906 19:57:05.444244  658394 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local cache directory, skipping pull
	I0906 19:57:05.444249  658394 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad exists in cache, skipping pull
	I0906 19:57:05.444256  658394 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad as a tarball
	I0906 19:57:05.444261  658394 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad from local cache
	I0906 19:57:21.381636  658394 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad from cached tarball
	I0906 19:57:21.381678  658394 cache.go:195] Successfully downloaded all kic artifacts
	I0906 19:57:21.381708  658394 start.go:365] acquiring machines lock for addons-342654: {Name:mk51ed098e141245ff8607dacb778466a8d4841e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:57:21.381831  658394 start.go:369] acquired machines lock for "addons-342654" in 100.956µs
	I0906 19:57:21.381859  658394 start.go:93] Provisioning new machine with config: &{Name:addons-342654 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-342654 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 19:57:21.381952  658394 start.go:125] createHost starting for "" (driver="docker")
	I0906 19:57:21.383925  658394 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0906 19:57:21.384159  658394 start.go:159] libmachine.API.Create for "addons-342654" (driver="docker")
	I0906 19:57:21.384196  658394 client.go:168] LocalClient.Create starting
	I0906 19:57:21.384332  658394 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem
	I0906 19:57:21.780820  658394 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem
	I0906 19:57:22.578441  658394 cli_runner.go:164] Run: docker network inspect addons-342654 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0906 19:57:22.595649  658394 cli_runner.go:211] docker network inspect addons-342654 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0906 19:57:22.595731  658394 network_create.go:281] running [docker network inspect addons-342654] to gather additional debugging logs...
	I0906 19:57:22.595751  658394 cli_runner.go:164] Run: docker network inspect addons-342654
	W0906 19:57:22.612300  658394 cli_runner.go:211] docker network inspect addons-342654 returned with exit code 1
	I0906 19:57:22.612336  658394 network_create.go:284] error running [docker network inspect addons-342654]: docker network inspect addons-342654: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-342654 not found
	I0906 19:57:22.612349  658394 network_create.go:286] output of [docker network inspect addons-342654]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-342654 not found
	
	** /stderr **
	I0906 19:57:22.612420  658394 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0906 19:57:22.629321  658394 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40010dc820}
	I0906 19:57:22.629359  658394 network_create.go:123] attempt to create docker network addons-342654 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0906 19:57:22.629416  658394 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-342654 addons-342654
	I0906 19:57:22.704582  658394 network_create.go:107] docker network addons-342654 192.168.49.0/24 created
	I0906 19:57:22.704617  658394 kic.go:117] calculated static IP "192.168.49.2" for the "addons-342654" container
	I0906 19:57:22.704689  658394 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0906 19:57:22.721912  658394 cli_runner.go:164] Run: docker volume create addons-342654 --label name.minikube.sigs.k8s.io=addons-342654 --label created_by.minikube.sigs.k8s.io=true
	I0906 19:57:22.740425  658394 oci.go:103] Successfully created a docker volume addons-342654
	I0906 19:57:22.740521  658394 cli_runner.go:164] Run: docker run --rm --name addons-342654-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-342654 --entrypoint /usr/bin/test -v addons-342654:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -d /var/lib
	I0906 19:57:24.879176  658394 cli_runner.go:217] Completed: docker run --rm --name addons-342654-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-342654 --entrypoint /usr/bin/test -v addons-342654:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -d /var/lib: (2.138611881s)
	I0906 19:57:24.879207  658394 oci.go:107] Successfully prepared a docker volume addons-342654
	I0906 19:57:24.879232  658394 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0906 19:57:24.879256  658394 kic.go:190] Starting extracting preloaded images to volume ...
	I0906 19:57:24.879348  658394 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-342654:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -I lz4 -xf /preloaded.tar -C /extractDir
	I0906 19:57:29.043529  658394 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-342654:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -I lz4 -xf /preloaded.tar -C /extractDir: (4.164126815s)
	I0906 19:57:29.043560  658394 kic.go:199] duration metric: took 4.164300 seconds to extract preloaded images to volume
	W0906 19:57:29.043704  658394 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0906 19:57:29.043814  658394 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0906 19:57:29.108871  658394 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-342654 --name addons-342654 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-342654 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-342654 --network addons-342654 --ip 192.168.49.2 --volume addons-342654:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad
	I0906 19:57:29.459709  658394 cli_runner.go:164] Run: docker container inspect addons-342654 --format={{.State.Running}}
	I0906 19:57:29.480951  658394 cli_runner.go:164] Run: docker container inspect addons-342654 --format={{.State.Status}}
	I0906 19:57:29.507152  658394 cli_runner.go:164] Run: docker exec addons-342654 stat /var/lib/dpkg/alternatives/iptables
	I0906 19:57:29.594672  658394 oci.go:144] the created container "addons-342654" has a running status.
	I0906 19:57:29.594698  658394 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17116-652515/.minikube/machines/addons-342654/id_rsa...
	I0906 19:57:29.946185  658394 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17116-652515/.minikube/machines/addons-342654/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0906 19:57:29.979069  658394 cli_runner.go:164] Run: docker container inspect addons-342654 --format={{.State.Status}}
	I0906 19:57:30.003799  658394 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0906 19:57:30.003825  658394 kic_runner.go:114] Args: [docker exec --privileged addons-342654 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0906 19:57:30.101134  658394 cli_runner.go:164] Run: docker container inspect addons-342654 --format={{.State.Status}}
	I0906 19:57:30.136959  658394 machine.go:88] provisioning docker machine ...
	I0906 19:57:30.137000  658394 ubuntu.go:169] provisioning hostname "addons-342654"
	I0906 19:57:30.137076  658394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-342654
	I0906 19:57:30.168309  658394 main.go:141] libmachine: Using SSH client type: native
	I0906 19:57:30.168787  658394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33417 <nil> <nil>}
	I0906 19:57:30.168812  658394 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-342654 && echo "addons-342654" | sudo tee /etc/hostname
	I0906 19:57:30.169557  658394 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0906 19:57:33.323114  658394 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-342654
	
	I0906 19:57:33.323202  658394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-342654
	I0906 19:57:33.343703  658394 main.go:141] libmachine: Using SSH client type: native
	I0906 19:57:33.344202  658394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33417 <nil> <nil>}
	I0906 19:57:33.344226  658394 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-342654' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-342654/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-342654' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 19:57:33.483379  658394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 19:57:33.483406  658394 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17116-652515/.minikube CaCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17116-652515/.minikube}
	I0906 19:57:33.483426  658394 ubuntu.go:177] setting up certificates
	I0906 19:57:33.483456  658394 provision.go:83] configureAuth start
	I0906 19:57:33.483521  658394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-342654
	I0906 19:57:33.504681  658394 provision.go:138] copyHostCerts
	I0906 19:57:33.504768  658394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem (1082 bytes)
	I0906 19:57:33.504887  658394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem (1123 bytes)
	I0906 19:57:33.504950  658394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem (1679 bytes)
	I0906 19:57:33.504995  658394 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem org=jenkins.addons-342654 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-342654]
	I0906 19:57:33.714019  658394 provision.go:172] copyRemoteCerts
	I0906 19:57:33.714103  658394 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 19:57:33.714146  658394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-342654
	I0906 19:57:33.732036  658394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33417 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/addons-342654/id_rsa Username:docker}
	I0906 19:57:33.834460  658394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 19:57:33.863662  658394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0906 19:57:33.892423  658394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 19:57:33.921087  658394 provision.go:86] duration metric: configureAuth took 437.614816ms
	I0906 19:57:33.921110  658394 ubuntu.go:193] setting minikube options for container-runtime
	I0906 19:57:33.921305  658394 config.go:182] Loaded profile config "addons-342654": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 19:57:33.921403  658394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-342654
	I0906 19:57:33.939393  658394 main.go:141] libmachine: Using SSH client type: native
	I0906 19:57:33.939862  658394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33417 <nil> <nil>}
	I0906 19:57:33.939886  658394 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 19:57:34.207758  658394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 19:57:34.207778  658394 machine.go:91] provisioned docker machine in 4.070792178s
	I0906 19:57:34.207787  658394 client.go:171] LocalClient.Create took 12.823583105s
	I0906 19:57:34.207803  658394 start.go:167] duration metric: libmachine.API.Create for "addons-342654" took 12.823645596s
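The CRIO_MINIKUBE_OPTIONS step a few lines above writes a systemd environment file over the forwarded SSH port (127.0.0.1:33417) and restarts CRI-O. A minimal, hypothetical sketch of running such a remote command with golang.org/x/crypto/ssh; the key path, user, port and command string are copied from the log, everything else is illustrative and not minikube's own code:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Values lifted from the log above; adjust for a real host.
		keyPath := "/home/jenkins/minikube-integration/17116-652515/.minikube/machines/addons-342654/id_rsa"
		addr := "127.0.0.1:33417"
		cmd := `sudo mkdir -p /etc/sysconfig && printf %s "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`

		key, err := os.ReadFile(keyPath)
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test VM, not for production
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()

		out, err := session.CombinedOutput(cmd)
		fmt.Printf("output: %s err: %v\n", out, err)
	}
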
	I0906 19:57:34.207809  658394 start.go:300] post-start starting for "addons-342654" (driver="docker")
	I0906 19:57:34.207819  658394 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 19:57:34.207898  658394 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 19:57:34.207966  658394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-342654
	I0906 19:57:34.228007  658394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33417 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/addons-342654/id_rsa Username:docker}
	I0906 19:57:34.329134  658394 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 19:57:34.333432  658394 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 19:57:34.333471  658394 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 19:57:34.333482  658394 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 19:57:34.333489  658394 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0906 19:57:34.333500  658394 filesync.go:126] Scanning /home/jenkins/minikube-integration/17116-652515/.minikube/addons for local assets ...
	I0906 19:57:34.333577  658394 filesync.go:126] Scanning /home/jenkins/minikube-integration/17116-652515/.minikube/files for local assets ...
	I0906 19:57:34.333603  658394 start.go:303] post-start completed in 125.788039ms
	I0906 19:57:34.333966  658394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-342654
	I0906 19:57:34.352234  658394 profile.go:148] Saving config to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/config.json ...
	I0906 19:57:34.352525  658394 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 19:57:34.352579  658394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-342654
	I0906 19:57:34.370214  658394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33417 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/addons-342654/id_rsa Username:docker}
	I0906 19:57:34.468207  658394 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 19:57:34.474013  658394 start.go:128] duration metric: createHost completed in 13.092048317s
	I0906 19:57:34.474040  658394 start.go:83] releasing machines lock for "addons-342654", held for 13.09219674s
	I0906 19:57:34.474130  658394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-342654
	I0906 19:57:34.491465  658394 ssh_runner.go:195] Run: cat /version.json
	I0906 19:57:34.491524  658394 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 19:57:34.491589  658394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-342654
	I0906 19:57:34.491528  658394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-342654
	I0906 19:57:34.523962  658394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33417 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/addons-342654/id_rsa Username:docker}
	I0906 19:57:34.524239  658394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33417 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/addons-342654/id_rsa Username:docker}
	I0906 19:57:34.757342  658394 ssh_runner.go:195] Run: systemctl --version
	I0906 19:57:34.763237  658394 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 19:57:34.913194  658394 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 19:57:34.919209  658394 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 19:57:34.944457  658394 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0906 19:57:34.944545  658394 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 19:57:34.989137  658394 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0906 19:57:34.989160  658394 start.go:466] detecting cgroup driver to use...
	I0906 19:57:34.989192  658394 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0906 19:57:34.989241  658394 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 19:57:35.010347  658394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 19:57:35.025862  658394 docker.go:196] disabling cri-docker service (if available) ...
	I0906 19:57:35.025939  658394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 19:57:35.047809  658394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 19:57:35.064608  658394 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 19:57:35.171238  658394 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 19:57:35.277297  658394 docker.go:212] disabling docker service ...
	I0906 19:57:35.277361  658394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 19:57:35.299945  658394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 19:57:35.314868  658394 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 19:57:35.415435  658394 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 19:57:35.521159  658394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 19:57:35.535127  658394 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 19:57:35.555410  658394 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0906 19:57:35.555476  658394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:57:35.568145  658394 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 19:57:35.568213  658394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:57:35.581752  658394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:57:35.594459  658394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:57:35.606593  658394 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 19:57:35.618369  658394 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 19:57:35.628919  658394 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 19:57:35.639718  658394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:57:35.729860  658394 ssh_runner.go:195] Run: sudo systemctl restart crio
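The four sed invocations above pin the pause image, switch the cgroup manager to cgroupfs, drop any existing conmon_cgroup line and re-add it, before CRI-O is reloaded and restarted. A rough stdlib-only Go equivalent of that edit, shown purely as an illustration and assuming the same 02-crio.conf drop-in path:

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		path := "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		conf := string(data)

		// Pin the pause image.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

		// Drop any existing conmon_cgroup line, then set cgroup_manager and re-add conmon_cgroup after it
		// (same end state as the separate sed -i '/d' and '/a' steps in the log).
		conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

		if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
			log.Fatal(err)
		}
		// CRI-O still needs `systemctl daemon-reload && systemctl restart crio` afterwards, as in the log.
	}
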
	I0906 19:57:35.847919  658394 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 19:57:35.848076  658394 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 19:57:35.853058  658394 start.go:534] Will wait 60s for crictl version
	I0906 19:57:35.853124  658394 ssh_runner.go:195] Run: which crictl
	I0906 19:57:35.857519  658394 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 19:57:35.901622  658394 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0906 19:57:35.901780  658394 ssh_runner.go:195] Run: crio --version
	I0906 19:57:35.945862  658394 ssh_runner.go:195] Run: crio --version
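The "Will wait 60s for socket path" and "Will wait 60s for crictl version" lines correspond to a simple poll-until-ready loop before the runtime is used. A hedged stand-alone sketch of that pattern; the 60s budget, socket path and crictl path come from the log, the helper itself is illustrative:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	// waitFor retries the command until it succeeds or the deadline passes.
	func waitFor(deadline time.Duration, name string, args ...string) ([]byte, error) {
		stop := time.Now().Add(deadline)
		for {
			out, err := exec.Command(name, args...).CombinedOutput()
			if err == nil {
				return out, nil
			}
			if time.Now().After(stop) {
				return out, fmt.Errorf("gave up waiting for %s: %v", name, err)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		// Wait for the CRI-O socket to exist, then for crictl to answer.
		if _, err := waitFor(60*time.Second, "stat", "/var/run/crio/crio.sock"); err != nil {
			log.Fatal(err)
		}
		out, err := waitFor(60*time.Second, "sudo", "/usr/bin/crictl", "version")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s", out)
	}
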
	I0906 19:57:35.992328  658394 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0906 19:57:35.994177  658394 cli_runner.go:164] Run: docker network inspect addons-342654 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0906 19:57:36.028016  658394 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0906 19:57:36.033455  658394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 19:57:36.051287  658394 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0906 19:57:36.051356  658394 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 19:57:36.134091  658394 crio.go:496] all images are preloaded for cri-o runtime.
	I0906 19:57:36.134117  658394 crio.go:415] Images already preloaded, skipping extraction
	I0906 19:57:36.134179  658394 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 19:57:36.177480  658394 crio.go:496] all images are preloaded for cri-o runtime.
	I0906 19:57:36.177499  658394 cache_images.go:84] Images are preloaded, skipping loading
	I0906 19:57:36.177570  658394 ssh_runner.go:195] Run: crio config
	I0906 19:57:36.232392  658394 cni.go:84] Creating CNI manager for ""
	I0906 19:57:36.232413  658394 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0906 19:57:36.232464  658394 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 19:57:36.232489  658394 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-342654 NodeName:addons-342654 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 19:57:36.232654  658394 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-342654"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 19:57:36.232743  658394 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-342654 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:addons-342654 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
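The kubeadm config written above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) scp'd to /var/tmp/minikube/kubeadm.yaml.new. A small sketch that walks such a file and prints each document's apiVersion and kind, handy when inspecting a generated config by hand; it assumes the gopkg.in/yaml.v3 module and is not part of minikube:

	package main

	import (
		"errors"
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break
				}
				log.Fatal(err)
			}
			// Each document carries apiVersion and kind, e.g. kubeadm.k8s.io/v1beta3 / ClusterConfiguration.
			fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
		}
	}
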
	I0906 19:57:36.232815  658394 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0906 19:57:36.243077  658394 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 19:57:36.243149  658394 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 19:57:36.253277  658394 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0906 19:57:36.273811  658394 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 19:57:36.294300  658394 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0906 19:57:36.314597  658394 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0906 19:57:36.318814  658394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
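The bash one-liners above (for host.minikube.internal and control-plane.minikube.internal) strip any stale entry from /etc/hosts and append a fresh one, so the rewrite stays idempotent. A stdlib Go sketch of the same idea, writing to a temporary copy first; the paths and hostnames are taken from the log, the helper itself is illustrative:

	package main

	import (
		"log"
		"os"
		"strings"
	)

	// ensureHostsEntry drops any existing line for host and appends "ip\thost".
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // stale entry, drop it
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)

		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			return err
		}
		// The log copies over the original with `sudo cp`; a rename works when running as root.
		return os.Rename(tmp, path)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
			log.Fatal(err)
		}
	}
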
	I0906 19:57:36.332185  658394 certs.go:56] Setting up /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654 for IP: 192.168.49.2
	I0906 19:57:36.332264  658394 certs.go:190] acquiring lock for shared ca certs: {Name:mk5596cf7beb26b5b83b50e551aa70cf266830a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:57:36.332861  658394 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.key
	I0906 19:57:37.497025  658394 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt ...
	I0906 19:57:37.497068  658394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt: {Name:mk7438a2e51e4635a50e1467d53956d63acd9f09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:57:37.497322  658394 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17116-652515/.minikube/ca.key ...
	I0906 19:57:37.497336  658394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/ca.key: {Name:mk60f3c58584e0a88e1f964980207da1ff2c95e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:57:37.497454  658394 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.key
	I0906 19:57:38.061396  658394 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.crt ...
	I0906 19:57:38.061440  658394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.crt: {Name:mkace5e02873aad50cf0cb54a2a2fe271b6238c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:57:38.061724  658394 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.key ...
	I0906 19:57:38.061740  658394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.key: {Name:mk103c078040fb0fb59476b41c6fa842acac5b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:57:38.061946  658394 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.key
	I0906 19:57:38.061964  658394 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt with IP's: []
	I0906 19:57:39.198594  658394 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt ...
	I0906 19:57:39.198631  658394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: {Name:mk570735150aa8935df525a6a230afc54fa71621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:57:39.199408  658394 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.key ...
	I0906 19:57:39.199424  658394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.key: {Name:mk9bffc8d976e1bd352f50efb379dacf0214663a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:57:39.199994  658394 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/apiserver.key.dd3b5fb2
	I0906 19:57:39.200016  658394 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0906 19:57:41.045651  658394 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/apiserver.crt.dd3b5fb2 ...
	I0906 19:57:41.045684  658394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/apiserver.crt.dd3b5fb2: {Name:mk1753a4bd950fb60a5baff1becbbd0b5fb0c5b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:57:41.046490  658394 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/apiserver.key.dd3b5fb2 ...
	I0906 19:57:41.046509  658394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/apiserver.key.dd3b5fb2: {Name:mk39d3c1805c6cb58dd3866575eb170134681bb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:57:41.046996  658394 certs.go:337] copying /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/apiserver.crt
	I0906 19:57:41.047074  658394 certs.go:341] copying /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/apiserver.key
	I0906 19:57:41.047123  658394 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/proxy-client.key
	I0906 19:57:41.047142  658394 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/proxy-client.crt with IP's: []
	I0906 19:57:41.584687  658394 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/proxy-client.crt ...
	I0906 19:57:41.584720  658394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/proxy-client.crt: {Name:mk039cc23c61c1ac059320b8ce4b608448aa9dee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:57:41.585389  658394 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/proxy-client.key ...
	I0906 19:57:41.585408  658394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/proxy-client.key: {Name:mk9957af06a16d1820d0aa5e2345d5b153f00c52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:57:41.585988  658394 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 19:57:41.586036  658394 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem (1082 bytes)
	I0906 19:57:41.586086  658394 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem (1123 bytes)
	I0906 19:57:41.586116  658394 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem (1679 bytes)
	I0906 19:57:41.586703  658394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 19:57:41.616124  658394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 19:57:41.645236  658394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 19:57:41.674456  658394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 19:57:41.703806  658394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 19:57:41.733522  658394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 19:57:41.761784  658394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 19:57:41.790858  658394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 19:57:41.820042  658394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 19:57:41.849919  658394 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 19:57:41.871658  658394 ssh_runner.go:195] Run: openssl version
	I0906 19:57:41.879092  658394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 19:57:41.890683  658394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:57:41.895355  658394 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:57:41.895465  658394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:57:41.904156  658394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
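The openssl x509 -hash call above yields the subject hash (b5213941 here) that names the /etc/ssl/certs symlink for the minikube CA. A small sketch of that flow, shelling out to openssl and then creating the link; the certificate path comes from the log, the rest is an assumption about how one might reproduce the step by hand:

	package main

	import (
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"

		// openssl prints the subject hash (e.g. b5213941) on a single line.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out))

		link := "/etc/ssl/certs/" + hash + ".0"
		_ = os.Remove(link) // mimic `ln -fs`: replace an existing link if present
		if err := os.Symlink(pem, link); err != nil {
			log.Fatal(err)
		}
		log.Printf("linked %s -> %s", link, pem)
	}
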
	I0906 19:57:41.915759  658394 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0906 19:57:41.920250  658394 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0906 19:57:41.920336  658394 kubeadm.go:404] StartCluster: {Name:addons-342654 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-342654 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 19:57:41.920445  658394 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 19:57:41.920500  658394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 19:57:41.962619  658394 cri.go:89] found id: ""
	I0906 19:57:41.962733  658394 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 19:57:41.973405  658394 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 19:57:41.983984  658394 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0906 19:57:41.984087  658394 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 19:57:41.994745  658394 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 19:57:41.994787  658394 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 19:57:42.112451  658394 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-aws\n", err: exit status 1
	I0906 19:57:42.236435  658394 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 19:57:59.700809  658394 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0906 19:57:59.700892  658394 kubeadm.go:322] [preflight] Running pre-flight checks
	I0906 19:57:59.700995  658394 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0906 19:57:59.701061  658394 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1044-aws
	I0906 19:57:59.701102  658394 kubeadm.go:322] OS: Linux
	I0906 19:57:59.701148  658394 kubeadm.go:322] CGROUPS_CPU: enabled
	I0906 19:57:59.701223  658394 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0906 19:57:59.701296  658394 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0906 19:57:59.701362  658394 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0906 19:57:59.701410  658394 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0906 19:57:59.701481  658394 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0906 19:57:59.701557  658394 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0906 19:57:59.701632  658394 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0906 19:57:59.701686  658394 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0906 19:57:59.701767  658394 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 19:57:59.701870  658394 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 19:57:59.701970  658394 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 19:57:59.702038  658394 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 19:57:59.703787  658394 out.go:204]   - Generating certificates and keys ...
	I0906 19:57:59.703872  658394 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0906 19:57:59.703939  658394 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0906 19:57:59.704010  658394 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 19:57:59.704068  658394 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0906 19:57:59.704131  658394 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0906 19:57:59.704181  658394 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0906 19:57:59.704236  658394 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0906 19:57:59.704348  658394 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-342654 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0906 19:57:59.704402  658394 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0906 19:57:59.704512  658394 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-342654 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0906 19:57:59.704577  658394 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 19:57:59.704640  658394 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 19:57:59.704685  658394 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0906 19:57:59.704741  658394 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 19:57:59.704792  658394 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 19:57:59.704842  658394 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 19:57:59.704908  658394 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 19:57:59.704965  658394 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 19:57:59.705044  658394 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 19:57:59.705109  658394 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 19:57:59.706999  658394 out.go:204]   - Booting up control plane ...
	I0906 19:57:59.707112  658394 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 19:57:59.707211  658394 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 19:57:59.707291  658394 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 19:57:59.707414  658394 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 19:57:59.707502  658394 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 19:57:59.707545  658394 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0906 19:57:59.707705  658394 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 19:57:59.707785  658394 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503109 seconds
	I0906 19:57:59.707895  658394 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 19:57:59.708023  658394 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 19:57:59.708085  658394 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 19:57:59.708269  658394 kubeadm.go:322] [mark-control-plane] Marking the node addons-342654 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 19:57:59.708329  658394 kubeadm.go:322] [bootstrap-token] Using token: 0m5azn.sxxtf0w93njj2jhu
	I0906 19:57:59.710317  658394 out.go:204]   - Configuring RBAC rules ...
	I0906 19:57:59.710438  658394 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 19:57:59.710534  658394 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 19:57:59.710677  658394 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 19:57:59.710821  658394 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 19:57:59.710942  658394 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 19:57:59.711049  658394 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 19:57:59.711168  658394 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 19:57:59.711216  658394 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0906 19:57:59.711268  658394 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0906 19:57:59.711276  658394 kubeadm.go:322] 
	I0906 19:57:59.711337  658394 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0906 19:57:59.711345  658394 kubeadm.go:322] 
	I0906 19:57:59.711422  658394 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0906 19:57:59.711430  658394 kubeadm.go:322] 
	I0906 19:57:59.711456  658394 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0906 19:57:59.711526  658394 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 19:57:59.711581  658394 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 19:57:59.711589  658394 kubeadm.go:322] 
	I0906 19:57:59.711643  658394 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0906 19:57:59.711651  658394 kubeadm.go:322] 
	I0906 19:57:59.711700  658394 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 19:57:59.711707  658394 kubeadm.go:322] 
	I0906 19:57:59.711760  658394 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0906 19:57:59.711839  658394 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 19:57:59.711912  658394 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 19:57:59.711920  658394 kubeadm.go:322] 
	I0906 19:57:59.712008  658394 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 19:57:59.712090  658394 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0906 19:57:59.712098  658394 kubeadm.go:322] 
	I0906 19:57:59.712182  658394 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0m5azn.sxxtf0w93njj2jhu \
	I0906 19:57:59.712289  658394 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:925f63182e76e2af8a48585abf1c88b69bde0aecb697a8f6aa9904972710d54a \
	I0906 19:57:59.712314  658394 kubeadm.go:322] 	--control-plane 
	I0906 19:57:59.712322  658394 kubeadm.go:322] 
	I0906 19:57:59.712407  658394 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0906 19:57:59.712432  658394 kubeadm.go:322] 
	I0906 19:57:59.712521  658394 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0m5azn.sxxtf0w93njj2jhu \
	I0906 19:57:59.712635  658394 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:925f63182e76e2af8a48585abf1c88b69bde0aecb697a8f6aa9904972710d54a 
	I0906 19:57:59.712648  658394 cni.go:84] Creating CNI manager for ""
	I0906 19:57:59.712656  658394 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0906 19:57:59.715843  658394 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0906 19:57:59.717524  658394 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0906 19:57:59.723153  658394 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0906 19:57:59.723172  658394 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0906 19:57:59.774947  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0906 19:58:00.693507  658394 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 19:58:00.693582  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:00.693704  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138 minikube.k8s.io/name=addons-342654 minikube.k8s.io/updated_at=2023_09_06T19_58_00_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:00.811125  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:00.811177  658394 ops.go:34] apiserver oom_adj: -16
	I0906 19:58:00.938881  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:01.536498  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:02.036633  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:02.535971  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:03.035886  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:03.535975  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:04.036250  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:04.536514  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:05.036846  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:05.536362  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:06.036574  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:06.536549  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:07.036767  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:07.536382  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:08.035933  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:08.536168  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:09.036487  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:09.536649  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:10.036772  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:10.536401  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:11.035867  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:11.536527  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:12.035878  658394 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:58:12.202303  658394 kubeadm.go:1081] duration metric: took 11.508795576s to wait for elevateKubeSystemPrivileges.
	I0906 19:58:12.202328  658394 kubeadm.go:406] StartCluster complete in 30.281996974s
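The repeated `kubectl get sa default` runs above are a poll for the default service account to appear before the cluster-admin RBAC binding is applied (about 11.5s here, per the elevateKubeSystemPrivileges metric). A hedged sketch of that wait loop; the kubectl and kubeconfig paths are copied from the log, the loop and its timeout are illustrative:

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.28.1/kubectl"
		kubeconfig := "/var/lib/minikube/kubeconfig"

		deadline := time.Now().Add(2 * time.Minute)
		for {
			err := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig="+kubeconfig).Run()
			if err == nil {
				log.Print("default service account is ready")
				return
			}
			if time.Now().After(deadline) {
				log.Fatalf("timed out waiting for the default service account: %v", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
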
	I0906 19:58:12.202345  658394 settings.go:142] acquiring lock: {Name:mk0ee322179d939fb926f535c1408b304c5b8b41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:58:12.202455  658394 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17116-652515/kubeconfig
	I0906 19:58:12.202814  658394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/kubeconfig: {Name:mkd5486ff1869e88b8977ac367495417356f4177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:58:12.204964  658394 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 19:58:12.205224  658394 config.go:182] Loaded profile config "addons-342654": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 19:58:12.205255  658394 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0906 19:58:12.205322  658394 addons.go:69] Setting volumesnapshots=true in profile "addons-342654"
	I0906 19:58:12.205336  658394 addons.go:231] Setting addon volumesnapshots=true in "addons-342654"
	I0906 19:58:12.205369  658394 host.go:66] Checking if "addons-342654" exists ...
	I0906 19:58:12.205786  658394 cli_runner.go:164] Run: docker container inspect addons-342654 --format={{.State.Status}}
	I0906 19:58:12.206456  658394 addons.go:69] Setting gcp-auth=true in profile "addons-342654"
	I0906 19:58:12.206478  658394 mustload.go:65] Loading cluster: addons-342654
	I0906 19:58:12.206644  658394 config.go:182] Loaded profile config "addons-342654": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 19:58:12.206863  658394 cli_runner.go:164] Run: docker container inspect addons-342654 --format={{.State.Status}}
	I0906 19:58:12.206948  658394 addons.go:69] Setting cloud-spanner=true in profile "addons-342654"
	I0906 19:58:12.206961  658394 addons.go:231] Setting addon cloud-spanner=true in "addons-342654"
	I0906 19:58:12.206991  658394 host.go:66] Checking if "addons-342654" exists ...
	I0906 19:58:12.207326  658394 cli_runner.go:164] Run: docker container inspect addons-342654 --format={{.State.Status}}
	I0906 19:58:12.207386  658394 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-342654"
	I0906 19:58:12.207410  658394 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-342654"
	I0906 19:58:12.207434  658394 host.go:66] Checking if "addons-342654" exists ...
	I0906 19:58:12.207768  658394 cli_runner.go:164] Run: docker container inspect addons-342654 --format={{.State.Status}}
	I0906 19:58:12.207820  658394 addons.go:69] Setting default-storageclass=true in profile "addons-342654"
	I0906 19:58:12.207830  658394 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-342654"
	I0906 19:58:12.208031  658394 cli_runner.go:164] Run: docker container inspect addons-342654 --format={{.State.Status}}
	I0906 19:58:12.208093  658394 addons.go:69] Setting inspektor-gadget=true in profile "addons-342654"
	I0906 19:58:12.208102  658394 addons.go:231] Setting addon inspektor-gadget=true in "addons-342654"
	I0906 19:58:12.208125  658394 host.go:66] Checking if "addons-342654" exists ...
	I0906 19:58:12.208457  658394 cli_runner.go:164] Run: docker container inspect addons-342654 --format={{.State.Status}}
	I0906 19:58:12.208508  658394 addons.go:69] Setting ingress=true in profile "addons-342654"
	I0906 19:58:12.208516  658394 addons.go:231] Setting addon ingress=true in "addons-342654"
	I0906 19:58:12.208543  658394 host.go:66] Checking if "addons-342654" exists ...
	I0906 19:58:12.208862  658394 cli_runner.go:164] Run: docker container inspect addons-342654 --format={{.State.Status}}
	I0906 19:58:12.208918  658394 addons.go:69] Setting ingress-dns=true in profile "addons-342654"
	I0906 19:58:12.208926  658394 addons.go:231] Setting addon ingress-dns=true in "addons-342654"
	I0906 19:58:12.208950  658394 host.go:66] Checking if "addons-342654" exists ...
	I0906 19:58:12.209272  658394 cli_runner.go:164] Run: docker container inspect addons-342654 --format={{.State.Status}}
	I0906 19:58:12.209325  658394 addons.go:69] Setting registry=true in profile "addons-342654"
	I0906 19:58:12.209333  658394 addons.go:231] Setting addon registry=true in "addons-342654"
	I0906 19:58:12.209355  658394 host.go:66] Checking if "addons-342654" exists ...
	I0906 19:58:12.209677  658394 cli_runner.go:164] Run: docker container inspect addons-342654 --format={{.State.Status}}
	I0906 19:58:12.209727  658394 addons.go:69] Setting metrics-server=true in profile "addons-342654"
	I0906 19:58:12.209736  658394 addons.go:231] Setting addon metrics-server=true in "addons-342654"
	I0906 19:58:12.209756  658394 host.go:66] Checking if "addons-342654" exists ...
	I0906 19:58:12.212706  658394 addons.go:69] Setting storage-provisioner=true in profile "addons-342654"
	I0906 19:58:12.212734  658394 addons.go:231] Setting addon storage-provisioner=true in "addons-342654"
	I0906 19:58:12.212770  658394 host.go:66] Checking if "addons-342654" exists ...
	I0906 19:58:12.213198  658394 cli_runner.go:164] Run: docker container inspect addons-342654 --format={{.State.Status}}
	I0906 19:58:12.222673  658394 cli_runner.go:164] Run: docker container inspect addons-342654 --format={{.State.Status}}
	I0906 19:58:12.316506  658394 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0906 19:58:12.326532  658394 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0906 19:58:12.328499  658394 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0906 19:58:12.328521  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0906 19:58:12.328590  658394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-342654
	I0906 19:58:12.326823  658394 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 19:58:12.328676  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 19:58:12.328701  658394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-342654
	I0906 19:58:12.326890  658394 host.go:66] Checking if "addons-342654" exists ...
	I0906 19:58:12.412825  658394 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.9
	I0906 19:58:12.418231  658394 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0906 19:58:12.418254  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0906 19:58:12.418318  658394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-342654
	I0906 19:58:12.449574  658394 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0906 19:58:12.456919  658394 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0906 19:58:12.464259  658394 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0906 19:58:12.466308  658394 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0906 19:58:12.470251  658394 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0906 19:58:12.470260  658394 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	I0906 19:58:12.473935  658394 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0906 19:58:12.473959  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0906 19:58:12.472226  658394 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0906 19:58:12.484124  658394 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0906 19:58:12.474022  658394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-342654
	I0906 19:58:12.499550  658394 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0906 19:58:12.501579  658394 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0906 19:58:12.501599  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0906 19:58:12.501674  658394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-342654
	I0906 19:58:12.508863  658394 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0906 19:58:12.512490  658394 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0906 19:58:12.519549  658394 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0906 19:58:12.521919  658394 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 19:58:12.521938  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0906 19:58:12.521999  658394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-342654
	I0906 19:58:12.526136  658394 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-342654" context rescaled to 1 replicas
	I0906 19:58:12.526177  658394 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 19:58:12.528676  658394 out.go:177] * Verifying Kubernetes components...
	I0906 19:58:12.530458  658394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 19:58:12.552500  658394 addons.go:231] Setting addon default-storageclass=true in "addons-342654"
	I0906 19:58:12.552548  658394 host.go:66] Checking if "addons-342654" exists ...
	I0906 19:58:12.553002  658394 cli_runner.go:164] Run: docker container inspect addons-342654 --format={{.State.Status}}
	I0906 19:58:12.565753  658394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33417 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/addons-342654/id_rsa Username:docker}
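The cli_runner lines above run docker container inspect with a Go template to read the host port Docker published for the node container's 22/tcp, and sshutil then opens an SSH session to 127.0.0.1 on that port with the per-machine key and the docker user. Assuming the port and key shown in the log, the equivalent manual connection would be roughly:

	ssh -i /home/jenkins/minikube-integration/17116-652515/.minikube/machines/addons-342654/id_rsa -p 33417 docker@127.0.0.1

This is only an illustration of what the ssh client struct above encodes; the "scp memory -->" lines appear to stream the manifest bytes over that same session to the target path rather than copying a file from disk.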
	I0906 19:58:12.590190  658394 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0906 19:58:12.591999  658394 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 19:58:12.592209  658394 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 19:58:12.597931  658394 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 19:58:12.597939  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0906 19:58:12.597945  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 19:58:12.598011  658394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-342654
	I0906 19:58:12.598012  658394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-342654
	I0906 19:58:12.598547  658394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33417 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/addons-342654/id_rsa Username:docker}
	I0906 19:58:12.603547  658394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33417 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/addons-342654/id_rsa Username:docker}
	I0906 19:58:12.613869  658394 out.go:177]   - Using image docker.io/registry:2.8.1
	I0906 19:58:12.615928  658394 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0906 19:58:12.618426  658394 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0906 19:58:12.618445  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0906 19:58:12.618516  658394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-342654
	I0906 19:58:12.625448  658394 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 19:58:12.625464  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 19:58:12.625524  658394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-342654
	I0906 19:58:12.692954  658394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33417 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/addons-342654/id_rsa Username:docker}
	I0906 19:58:12.698991  658394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33417 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/addons-342654/id_rsa Username:docker}
	I0906 19:58:12.756486  658394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33417 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/addons-342654/id_rsa Username:docker}
	I0906 19:58:12.760872  658394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33417 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/addons-342654/id_rsa Username:docker}
	I0906 19:58:12.771836  658394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33417 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/addons-342654/id_rsa Username:docker}
	I0906 19:58:12.773059  658394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33417 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/addons-342654/id_rsa Username:docker}
	I0906 19:58:12.811273  658394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33417 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/addons-342654/id_rsa Username:docker}
	I0906 19:58:12.980466  658394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0906 19:58:12.993849  658394 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 19:58:12.993873  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0906 19:58:12.995422  658394 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0906 19:58:12.995443  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0906 19:58:13.013093  658394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 19:58:13.061654  658394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 19:58:13.089559  658394 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
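The /bin/bash pipeline above rewrites the live coredns ConfigMap so that in-cluster DNS can resolve host.minikube.internal to the host-side gateway. Reading the sed expressions themselves (this is not a capture of the resulting ConfigMap), the block injected ahead of the forward directive would look like:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

with an additional log directive inserted before errors; the later "host record injected into CoreDNS's ConfigMap" line confirms the replace succeeded.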
	I0906 19:58:13.090526  658394 node_ready.go:35] waiting up to 6m0s for node "addons-342654" to be "Ready" ...
	I0906 19:58:13.109851  658394 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 19:58:13.109876  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 19:58:13.129312  658394 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0906 19:58:13.129336  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0906 19:58:13.139647  658394 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0906 19:58:13.139674  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0906 19:58:13.152618  658394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 19:58:13.160482  658394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 19:58:13.167947  658394 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0906 19:58:13.167971  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0906 19:58:13.176570  658394 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0906 19:58:13.176591  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0906 19:58:13.280180  658394 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 19:58:13.280204  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 19:58:13.305121  658394 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0906 19:58:13.305146  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0906 19:58:13.308318  658394 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0906 19:58:13.308342  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0906 19:58:13.354205  658394 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0906 19:58:13.354232  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0906 19:58:13.361564  658394 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0906 19:58:13.361588  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0906 19:58:13.515643  658394 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0906 19:58:13.515671  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0906 19:58:13.519322  658394 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0906 19:58:13.519347  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0906 19:58:13.519956  658394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 19:58:13.523716  658394 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0906 19:58:13.523742  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0906 19:58:13.560324  658394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0906 19:58:13.658785  658394 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0906 19:58:13.658810  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0906 19:58:13.682916  658394 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 19:58:13.682940  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0906 19:58:13.711351  658394 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0906 19:58:13.711379  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0906 19:58:13.864862  658394 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0906 19:58:13.864888  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0906 19:58:13.872401  658394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 19:58:13.875960  658394 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0906 19:58:13.875985  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0906 19:58:13.963860  658394 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0906 19:58:13.963886  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0906 19:58:13.996921  658394 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 19:58:13.996938  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0906 19:58:14.008532  658394 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0906 19:58:14.008557  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0906 19:58:14.052812  658394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 19:58:14.090175  658394 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0906 19:58:14.090203  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0906 19:58:14.209240  658394 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0906 19:58:14.209271  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0906 19:58:14.332780  658394 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0906 19:58:14.332804  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0906 19:58:14.605068  658394 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 19:58:14.605092  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0906 19:58:14.768700  658394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 19:58:15.172232  658394 node_ready.go:58] node "addons-342654" has status "Ready":"False"
	I0906 19:58:15.960548  658394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.980046713s)
	I0906 19:58:15.960667  658394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.94753396s)
	I0906 19:58:17.542844  658394 node_ready.go:58] node "addons-342654" has status "Ready":"False"
	I0906 19:58:17.763837  658394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.702145876s)
	I0906 19:58:17.763916  658394 addons.go:467] Verifying addon ingress=true in "addons-342654"
	I0906 19:58:17.766012  658394 out.go:177] * Verifying ingress addon...
	I0906 19:58:17.764112  658394 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.674525786s)
	I0906 19:58:17.764238  658394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.611592416s)
	I0906 19:58:17.764270  658394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.603765825s)
	I0906 19:58:17.764362  658394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.244382354s)
	I0906 19:58:17.764409  658394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.204051267s)
	I0906 19:58:17.764517  658394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.892091708s)
	I0906 19:58:17.764592  658394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.711743062s)
	I0906 19:58:17.766189  658394 start.go:907] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0906 19:58:17.766309  658394 addons.go:467] Verifying addon metrics-server=true in "addons-342654"
	I0906 19:58:17.766348  658394 addons.go:467] Verifying addon registry=true in "addons-342654"
	W0906 19:58:17.766403  658394 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0906 19:58:17.769810  658394 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0906 19:58:17.771437  658394 out.go:177] * Verifying registry addon...
	I0906 19:58:17.771683  658394 retry.go:31] will retry after 281.777708ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
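The apply failure above is an ordering problem rather than a broken manifest: the VolumeSnapshotClass object is applied in the same batch as the CRDs that define it, and the API server has not finished establishing those CRDs when the custom resource arrives, hence "no matches for kind VolumeSnapshotClass". The retry.go line shows the standard response: re-run the apply after a short backoff (the rerun at 19:58:18 below also adds --force); applying the CRDs first and waiting for them to be Established would avoid the retry entirely. A minimal sketch of that retry-with-backoff pattern, with illustrative names, attempt counts and durations rather than minikube's actual retry.go:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs "kubectl apply -f manifest" until it succeeds or the
// attempts run out, sleeping a growing backoff between tries. Illustrative
// sketch of the retry-after-N-ms behaviour in the log, not minikube's retry.go.
func applyWithRetry(manifest string, attempts int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
		time.Sleep(backoff)
		backoff *= 2 // simple exponential backoff between attempts
	}
	return lastErr
}

func main() {
	// Manifest path taken from the log; attempts and backoff are arbitrary here.
	err := applyWithRetry("/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml", 5, 300*time.Millisecond)
	if err != nil {
		fmt.Println(err)
	}
}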
	I0906 19:58:17.774200  658394 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0906 19:58:17.791485  658394 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0906 19:58:17.791556  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:17.793559  658394 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0906 19:58:17.793621  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:17.800970  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:17.802340  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
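The kapi.go lines here, and the long run of "waiting for pod ... Pending" lines that follow, come from a simple poll loop: list the pods matching a label selector in the target namespace and keep checking until every match reports a Running phase or the timeout expires, which is why identical lines repeat a few times per second until the node goes Ready and the images finish pulling. A minimal client-go sketch of that pattern, assuming a standard kubeconfig on disk; the function name, poll interval and namespace are illustrative, not minikube's kapi.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning polls the pods matching selector in ns until every match
// reports phase Running, or the timeout expires. Illustrative sketch only.
func waitForPodsRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			running := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					running = false
					break
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // poll interval; minikube's is not documented here
	}
	return fmt.Errorf("pods %q in %q not Running within %v", selector, ns, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitForPodsRunning(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute))
}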
	I0906 19:58:18.017070  658394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.248294454s)
	I0906 19:58:18.017107  658394 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-342654"
	I0906 19:58:18.019227  658394 out.go:177] * Verifying csi-hostpath-driver addon...
	I0906 19:58:18.021932  658394 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0906 19:58:18.032967  658394 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0906 19:58:18.033034  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:18.041037  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:18.056208  658394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 19:58:18.309860  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:18.312254  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:18.550336  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:18.809334  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:18.810735  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:19.050380  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:19.344893  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:19.345861  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:19.397646  658394 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0906 19:58:19.397747  658394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-342654
	I0906 19:58:19.443641  658394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33417 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/addons-342654/id_rsa Username:docker}
	I0906 19:58:19.554951  658394 node_ready.go:58] node "addons-342654" has status "Ready":"False"
	I0906 19:58:19.557718  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:19.639562  658394 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0906 19:58:19.661384  658394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.605116791s)
	I0906 19:58:19.694653  658394 addons.go:231] Setting addon gcp-auth=true in "addons-342654"
	I0906 19:58:19.694720  658394 host.go:66] Checking if "addons-342654" exists ...
	I0906 19:58:19.695225  658394 cli_runner.go:164] Run: docker container inspect addons-342654 --format={{.State.Status}}
	I0906 19:58:19.731875  658394 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0906 19:58:19.731940  658394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-342654
	I0906 19:58:19.760751  658394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33417 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/addons-342654/id_rsa Username:docker}
	I0906 19:58:19.849967  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:19.851371  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:19.926082  658394 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0906 19:58:19.928018  658394 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0906 19:58:19.929678  658394 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0906 19:58:19.929734  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0906 19:58:19.985544  658394 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0906 19:58:19.985586  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0906 19:58:20.049611  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:20.054803  658394 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 19:58:20.054828  658394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0906 19:58:20.081119  658394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 19:58:20.311410  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:20.312808  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:20.546600  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:20.809894  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:20.817857  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:21.090694  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:21.285469  658394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.204305923s)
	I0906 19:58:21.289646  658394 addons.go:467] Verifying addon gcp-auth=true in "addons-342654"
	I0906 19:58:21.294065  658394 out.go:177] * Verifying gcp-auth addon...
	I0906 19:58:21.298245  658394 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0906 19:58:21.328741  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:21.329287  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:21.330123  658394 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0906 19:58:21.330173  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:21.336035  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:21.549040  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:21.809122  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:21.809792  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:21.840264  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:22.042902  658394 node_ready.go:58] node "addons-342654" has status "Ready":"False"
	I0906 19:58:22.047923  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:22.305919  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:22.309586  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:22.341144  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:22.547400  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:22.810717  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:22.811991  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:22.839640  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:23.046332  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:23.309557  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:23.310952  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:23.340771  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:23.545973  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:23.811595  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:23.812639  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:23.840552  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:24.048262  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:24.307887  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:24.309230  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:24.345066  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:24.543404  658394 node_ready.go:58] node "addons-342654" has status "Ready":"False"
	I0906 19:58:24.546353  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:24.809201  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:24.811091  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:24.840147  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:25.049658  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:25.308272  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:25.313822  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:25.340403  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:25.549156  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:25.808122  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:25.811549  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:25.840717  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:26.047151  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:26.308066  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:26.308721  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:26.340700  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:26.549042  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:26.809468  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:26.810732  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:26.841680  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:27.054859  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:27.057154  658394 node_ready.go:58] node "addons-342654" has status "Ready":"False"
	I0906 19:58:27.307089  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:27.309302  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:27.340914  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:27.547544  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:27.808912  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:27.809762  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:27.841204  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:28.049325  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:28.319626  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:28.321133  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:28.342194  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:28.547888  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:28.808315  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:28.809474  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:28.840286  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:29.045870  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:29.316439  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:29.316750  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:29.340618  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:29.542128  658394 node_ready.go:58] node "addons-342654" has status "Ready":"False"
	I0906 19:58:29.547110  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:29.808865  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:29.810657  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:29.847051  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:30.058126  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:30.311955  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:30.314649  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:30.340615  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:30.547346  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:30.806457  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:30.807240  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:30.840086  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:31.045129  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:31.306474  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:31.306699  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:31.340035  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:31.542300  658394 node_ready.go:58] node "addons-342654" has status "Ready":"False"
	I0906 19:58:31.545361  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:31.805811  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:31.807152  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:31.839758  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:32.045203  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:32.305428  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:32.306762  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:32.340505  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:32.546317  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:32.807032  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:32.808195  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:32.840368  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:33.046018  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:33.306314  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:33.306791  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:33.339956  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:33.545465  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:33.805234  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:33.807862  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:33.839986  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:34.041041  658394 node_ready.go:58] node "addons-342654" has status "Ready":"False"
	I0906 19:58:34.046377  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:34.306177  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:34.307069  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:34.339879  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:34.545065  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:34.806197  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:34.807040  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:34.839708  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:35.044753  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:35.306443  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:35.306820  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:35.340496  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:35.545566  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:35.805589  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:35.807146  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:35.839643  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:36.042780  658394 node_ready.go:58] node "addons-342654" has status "Ready":"False"
	I0906 19:58:36.046517  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:36.306969  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:36.307835  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:36.339744  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:36.545368  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:36.805459  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:36.807735  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:36.840311  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:37.056293  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:37.306523  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:37.307209  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:37.340656  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:37.545162  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:37.806069  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:37.807063  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:37.839779  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:38.045418  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:38.306322  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:38.308312  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:38.340038  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:38.541080  658394 node_ready.go:58] node "addons-342654" has status "Ready":"False"
	I0906 19:58:38.544834  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:38.806341  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:38.807404  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:38.840166  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:39.045374  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:39.305637  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:39.308052  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:39.339729  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:39.544994  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:39.806904  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:39.807326  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:39.841004  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:40.045907  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:40.306781  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:40.307718  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:40.339956  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:40.545992  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:40.806967  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:40.807682  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:40.840222  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:41.041951  658394 node_ready.go:58] node "addons-342654" has status "Ready":"False"
	I0906 19:58:41.046083  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:41.306929  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:41.307218  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:41.340470  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:41.545083  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:41.807602  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:41.808441  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:41.839736  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:42.046099  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:42.308019  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:42.308795  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:42.341199  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:42.544901  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:42.805235  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:42.807696  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:42.840505  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:43.045575  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:43.306393  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:43.308363  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:43.340274  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:43.548208  658394 node_ready.go:49] node "addons-342654" has status "Ready":"True"
	I0906 19:58:43.548230  658394 node_ready.go:38] duration metric: took 30.457675558s waiting for node "addons-342654" to be "Ready" ...
	I0906 19:58:43.548240  658394 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 19:58:43.552557  658394 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0906 19:58:43.552588  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:43.563973  658394 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kmw8d" in "kube-system" namespace to be "Ready" ...
	I0906 19:58:43.811454  658394 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0906 19:58:43.811494  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:43.814322  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:43.845014  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:44.054997  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:44.311726  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:44.312209  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:44.340645  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:44.547343  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:44.807726  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:44.809345  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:44.840198  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:45.078484  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:45.314734  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:45.323512  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:45.350503  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:45.547589  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:45.592791  658394 pod_ready.go:102] pod "coredns-5dd5756b68-kmw8d" in "kube-system" namespace has status "Ready":"False"
	I0906 19:58:45.821743  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:45.822556  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:45.840498  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:46.047757  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:46.092815  658394 pod_ready.go:92] pod "coredns-5dd5756b68-kmw8d" in "kube-system" namespace has status "Ready":"True"
	I0906 19:58:46.092840  658394 pod_ready.go:81] duration metric: took 2.528747995s waiting for pod "coredns-5dd5756b68-kmw8d" in "kube-system" namespace to be "Ready" ...
	I0906 19:58:46.092862  658394 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-342654" in "kube-system" namespace to be "Ready" ...
	I0906 19:58:46.116553  658394 pod_ready.go:92] pod "etcd-addons-342654" in "kube-system" namespace has status "Ready":"True"
	I0906 19:58:46.116586  658394 pod_ready.go:81] duration metric: took 23.676283ms waiting for pod "etcd-addons-342654" in "kube-system" namespace to be "Ready" ...
	I0906 19:58:46.116620  658394 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-342654" in "kube-system" namespace to be "Ready" ...
	I0906 19:58:46.127346  658394 pod_ready.go:92] pod "kube-apiserver-addons-342654" in "kube-system" namespace has status "Ready":"True"
	I0906 19:58:46.127372  658394 pod_ready.go:81] duration metric: took 10.736921ms waiting for pod "kube-apiserver-addons-342654" in "kube-system" namespace to be "Ready" ...
	I0906 19:58:46.127411  658394 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-342654" in "kube-system" namespace to be "Ready" ...
	I0906 19:58:46.148196  658394 pod_ready.go:92] pod "kube-controller-manager-addons-342654" in "kube-system" namespace has status "Ready":"True"
	I0906 19:58:46.148230  658394 pod_ready.go:81] duration metric: took 20.794942ms waiting for pod "kube-controller-manager-addons-342654" in "kube-system" namespace to be "Ready" ...
	I0906 19:58:46.148264  658394 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9gvtg" in "kube-system" namespace to be "Ready" ...
	I0906 19:58:46.308811  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:46.310313  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:46.351152  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:46.353486  658394 pod_ready.go:92] pod "kube-proxy-9gvtg" in "kube-system" namespace has status "Ready":"True"
	I0906 19:58:46.353510  658394 pod_ready.go:81] duration metric: took 205.23035ms waiting for pod "kube-proxy-9gvtg" in "kube-system" namespace to be "Ready" ...
	I0906 19:58:46.353524  658394 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-342654" in "kube-system" namespace to be "Ready" ...
	I0906 19:58:46.548630  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:46.742108  658394 pod_ready.go:92] pod "kube-scheduler-addons-342654" in "kube-system" namespace has status "Ready":"True"
	I0906 19:58:46.742167  658394 pod_ready.go:81] duration metric: took 388.633915ms waiting for pod "kube-scheduler-addons-342654" in "kube-system" namespace to be "Ready" ...
	I0906 19:58:46.742197  658394 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-v8gmm" in "kube-system" namespace to be "Ready" ...
	I0906 19:58:46.810343  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:46.811717  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:46.841331  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:47.047645  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:47.307615  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:47.308179  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:47.340553  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:47.553137  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:47.816035  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:47.818852  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:47.841010  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:48.048287  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:48.308000  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:48.308653  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:48.340201  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:48.548191  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:48.808175  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:48.809700  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:48.840786  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:49.048024  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:49.057208  658394 pod_ready.go:102] pod "metrics-server-7c66d45ddc-v8gmm" in "kube-system" namespace has status "Ready":"False"
	I0906 19:58:49.308851  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:49.311186  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:49.340202  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:49.547844  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:49.808244  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:49.810929  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:49.841170  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:50.047331  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:50.306785  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:50.309133  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:50.339818  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:50.547267  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:50.834317  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:50.855229  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:50.856643  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:51.047414  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:51.310065  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:51.313065  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:51.340460  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:51.548009  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:51.557298  658394 pod_ready.go:102] pod "metrics-server-7c66d45ddc-v8gmm" in "kube-system" namespace has status "Ready":"False"
	I0906 19:58:51.808968  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:51.819070  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:51.843246  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:52.049301  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:52.311424  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:52.313248  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:52.340708  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:52.548230  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:52.812765  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:52.813199  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:52.841547  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:53.048550  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:53.310320  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:53.311700  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:53.343844  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:53.548586  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:53.837633  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:53.839892  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:53.852313  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:54.061657  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:54.064275  658394 pod_ready.go:102] pod "metrics-server-7c66d45ddc-v8gmm" in "kube-system" namespace has status "Ready":"False"
	I0906 19:58:54.309337  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:54.310019  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:54.340716  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:54.547773  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:54.553559  658394 pod_ready.go:92] pod "metrics-server-7c66d45ddc-v8gmm" in "kube-system" namespace has status "Ready":"True"
	I0906 19:58:54.553585  658394 pod_ready.go:81] duration metric: took 7.811368285s waiting for pod "metrics-server-7c66d45ddc-v8gmm" in "kube-system" namespace to be "Ready" ...
	I0906 19:58:54.553606  658394 pod_ready.go:38] duration metric: took 11.005355095s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 19:58:54.553623  658394 api_server.go:52] waiting for apiserver process to appear ...
	I0906 19:58:54.553683  658394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 19:58:54.569997  658394 api_server.go:72] duration metric: took 42.043791849s to wait for apiserver process to appear ...
	I0906 19:58:54.570019  658394 api_server.go:88] waiting for apiserver healthz status ...
	I0906 19:58:54.570035  658394 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0906 19:58:54.579145  658394 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0906 19:58:54.580508  658394 api_server.go:141] control plane version: v1.28.1
	I0906 19:58:54.580567  658394 api_server.go:131] duration metric: took 10.5404ms to wait for apiserver health ...
	I0906 19:58:54.580590  658394 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 19:58:54.593107  658394 system_pods.go:59] 17 kube-system pods found
	I0906 19:58:54.593175  658394 system_pods.go:61] "coredns-5dd5756b68-kmw8d" [4a1770dc-783f-4527-9707-bc78d41b7f1e] Running
	I0906 19:58:54.593198  658394 system_pods.go:61] "csi-hostpath-attacher-0" [0c41e3a3-4c29-44c1-a045-f00123ce2da1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 19:58:54.593224  658394 system_pods.go:61] "csi-hostpath-resizer-0" [6fee58b7-bacf-43de-a299-e39d94ef2573] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 19:58:54.593260  658394 system_pods.go:61] "csi-hostpathplugin-t89lf" [d0f4d4e4-0b8e-41cc-a667-607deab29dbb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 19:58:54.593287  658394 system_pods.go:61] "etcd-addons-342654" [45834ca9-3dba-4d94-a6bf-56ca7eceebc7] Running
	I0906 19:58:54.593309  658394 system_pods.go:61] "kindnet-cf99k" [01358b4f-89d1-4d10-b454-47f741055da1] Running
	I0906 19:58:54.593330  658394 system_pods.go:61] "kube-apiserver-addons-342654" [33caf48e-6eb1-4238-9750-3564f3927520] Running
	I0906 19:58:54.593364  658394 system_pods.go:61] "kube-controller-manager-addons-342654" [3bebc17e-fd47-41dc-80e9-58f8c4f60321] Running
	I0906 19:58:54.593387  658394 system_pods.go:61] "kube-ingress-dns-minikube" [26cadeaa-f860-448a-b2b2-b97daa013a5c] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0906 19:58:54.593405  658394 system_pods.go:61] "kube-proxy-9gvtg" [89a8b835-c847-401d-9f43-f382e5adc6dd] Running
	I0906 19:58:54.593427  658394 system_pods.go:61] "kube-scheduler-addons-342654" [16c74830-5647-460c-8a7f-3770251f0041] Running
	I0906 19:58:54.593457  658394 system_pods.go:61] "metrics-server-7c66d45ddc-v8gmm" [061eb88b-0263-4464-a3c9-c628c00cc1ab] Running
	I0906 19:58:54.593483  658394 system_pods.go:61] "registry-proxy-v6f29" [1d37360f-66ee-42d0-a616-1b35af8ddf7a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0906 19:58:54.593502  658394 system_pods.go:61] "registry-xx7n9" [d9bc8b32-e703-4760-8ebf-167a7f52b2fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0906 19:58:54.593569  658394 system_pods.go:61] "snapshot-controller-58dbcc7b99-9ffl6" [0bb7dc9e-aa2d-4106-924c-30db2226a968] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 19:58:54.593599  658394 system_pods.go:61] "snapshot-controller-58dbcc7b99-ddp8n" [513f879b-769b-4843-9538-8433f15fcb7a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 19:58:54.593620  658394 system_pods.go:61] "storage-provisioner" [9edc42a2-7f11-4361-96c4-4dad2ca43fb5] Running
	I0906 19:58:54.593639  658394 system_pods.go:74] duration metric: took 13.031252ms to wait for pod list to return data ...
	I0906 19:58:54.593670  658394 default_sa.go:34] waiting for default service account to be created ...
	I0906 19:58:54.596284  658394 default_sa.go:45] found service account: "default"
	I0906 19:58:54.596303  658394 default_sa.go:55] duration metric: took 2.612575ms for default service account to be created ...
	I0906 19:58:54.596311  658394 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 19:58:54.608045  658394 system_pods.go:86] 17 kube-system pods found
	I0906 19:58:54.608117  658394 system_pods.go:89] "coredns-5dd5756b68-kmw8d" [4a1770dc-783f-4527-9707-bc78d41b7f1e] Running
	I0906 19:58:54.608145  658394 system_pods.go:89] "csi-hostpath-attacher-0" [0c41e3a3-4c29-44c1-a045-f00123ce2da1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 19:58:54.608193  658394 system_pods.go:89] "csi-hostpath-resizer-0" [6fee58b7-bacf-43de-a299-e39d94ef2573] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 19:58:54.608220  658394 system_pods.go:89] "csi-hostpathplugin-t89lf" [d0f4d4e4-0b8e-41cc-a667-607deab29dbb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 19:58:54.608242  658394 system_pods.go:89] "etcd-addons-342654" [45834ca9-3dba-4d94-a6bf-56ca7eceebc7] Running
	I0906 19:58:54.608263  658394 system_pods.go:89] "kindnet-cf99k" [01358b4f-89d1-4d10-b454-47f741055da1] Running
	I0906 19:58:54.608294  658394 system_pods.go:89] "kube-apiserver-addons-342654" [33caf48e-6eb1-4238-9750-3564f3927520] Running
	I0906 19:58:54.608320  658394 system_pods.go:89] "kube-controller-manager-addons-342654" [3bebc17e-fd47-41dc-80e9-58f8c4f60321] Running
	I0906 19:58:54.608343  658394 system_pods.go:89] "kube-ingress-dns-minikube" [26cadeaa-f860-448a-b2b2-b97daa013a5c] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0906 19:58:54.608364  658394 system_pods.go:89] "kube-proxy-9gvtg" [89a8b835-c847-401d-9f43-f382e5adc6dd] Running
	I0906 19:58:54.608395  658394 system_pods.go:89] "kube-scheduler-addons-342654" [16c74830-5647-460c-8a7f-3770251f0041] Running
	I0906 19:58:54.608422  658394 system_pods.go:89] "metrics-server-7c66d45ddc-v8gmm" [061eb88b-0263-4464-a3c9-c628c00cc1ab] Running
	I0906 19:58:54.608443  658394 system_pods.go:89] "registry-proxy-v6f29" [1d37360f-66ee-42d0-a616-1b35af8ddf7a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0906 19:58:54.608465  658394 system_pods.go:89] "registry-xx7n9" [d9bc8b32-e703-4760-8ebf-167a7f52b2fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0906 19:58:54.608502  658394 system_pods.go:89] "snapshot-controller-58dbcc7b99-9ffl6" [0bb7dc9e-aa2d-4106-924c-30db2226a968] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 19:58:54.608537  658394 system_pods.go:89] "snapshot-controller-58dbcc7b99-ddp8n" [513f879b-769b-4843-9538-8433f15fcb7a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 19:58:54.608556  658394 system_pods.go:89] "storage-provisioner" [9edc42a2-7f11-4361-96c4-4dad2ca43fb5] Running
	I0906 19:58:54.608580  658394 system_pods.go:126] duration metric: took 12.261342ms to wait for k8s-apps to be running ...
	I0906 19:58:54.608610  658394 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 19:58:54.608690  658394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 19:58:54.629593  658394 system_svc.go:56] duration metric: took 20.974544ms WaitForService to wait for kubelet.
	I0906 19:58:54.629665  658394 kubeadm.go:581] duration metric: took 42.103463513s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 19:58:54.629702  658394 node_conditions.go:102] verifying NodePressure condition ...
	I0906 19:58:54.633611  658394 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0906 19:58:54.633688  658394 node_conditions.go:123] node cpu capacity is 2
	I0906 19:58:54.633716  658394 node_conditions.go:105] duration metric: took 3.992421ms to run NodePressure ...
	I0906 19:58:54.633740  658394 start.go:228] waiting for startup goroutines ...
	I0906 19:58:54.807862  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:54.809678  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:54.843158  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:55.048389  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:55.306719  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:55.307665  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:55.339786  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:55.547437  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:55.814026  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:55.815021  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:55.840094  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:56.047541  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:56.307799  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:56.311640  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:56.340693  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:56.547969  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:56.809495  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:56.810696  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:56.840594  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:57.050326  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:57.311829  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:57.316203  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:57.340507  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:57.546524  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:57.814295  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:57.816086  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:57.841660  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:58.047333  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:58.320458  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:58.320715  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:58.343941  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:58.546987  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:58.812470  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:58.812761  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:58.840784  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:59.050501  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:59.321422  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:59.323401  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:59.341252  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:58:59.551978  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:58:59.806365  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:58:59.817976  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:58:59.839683  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:00.073208  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:00.312350  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:00.313951  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:00.350082  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:00.549228  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:00.810533  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:00.823744  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:00.840753  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:01.048114  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:01.308466  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:01.309909  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:01.340541  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:01.548379  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:01.808644  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:01.812183  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:01.840815  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:02.047711  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:02.307672  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:02.309299  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:02.340293  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:02.547245  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:02.806899  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:02.808338  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:02.839982  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:03.047675  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:03.308311  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:03.309773  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:03.344157  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:03.548735  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:03.810352  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:03.810751  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:03.840277  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:04.056907  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:04.309647  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:04.324011  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:04.340001  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:04.547301  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:04.806688  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:04.808127  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:04.839955  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:05.047395  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:05.307745  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:05.310662  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:05.340681  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:05.546971  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:05.807862  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:05.809294  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:05.840639  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:06.048307  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:06.306825  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:06.309569  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:06.341265  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:06.547053  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:06.809051  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:06.812648  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:06.841210  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:07.048234  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:07.306209  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:07.311664  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:07.341479  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:07.555133  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:07.808986  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:07.818276  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:07.843233  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:08.050058  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:08.312534  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:08.318524  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:08.340046  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:08.548077  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:08.832184  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:08.841485  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:08.854714  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:09.073759  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:09.310034  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:09.310785  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:09.344729  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:09.547486  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:09.808124  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:09.809578  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:09.840467  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:10.047496  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:10.307209  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:10.308465  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:10.340516  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:10.551686  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:10.812830  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:10.817243  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:10.842635  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:11.047659  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:11.310962  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:11.311923  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:11.343369  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:11.550903  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:11.808570  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:11.809079  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:11.840134  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:12.047890  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:12.306764  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:12.308084  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:12.339674  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:12.547378  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:12.807033  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:12.808045  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:12.839777  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:13.047690  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:13.307315  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:13.309352  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:13.339993  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:13.546763  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:13.809300  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:13.811215  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:13.843927  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:14.047482  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:14.308755  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:14.310008  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:14.341220  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:14.548826  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:14.819071  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:14.823190  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:14.844026  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:15.058006  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:15.308362  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:15.309760  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:15.340924  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:15.547669  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:15.814217  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:15.815110  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:15.840491  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:16.050976  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:16.315357  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:16.317737  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:16.341394  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:16.547906  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:16.818990  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:16.823374  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:16.840261  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:17.051241  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:17.307161  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:17.309562  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:17.340947  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:17.547069  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:17.806748  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:17.807931  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 19:59:17.840721  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:18.049320  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:18.306285  658394 kapi.go:107] duration metric: took 1m0.532080984s to wait for kubernetes.io/minikube-addons=registry ...
	I0906 19:59:18.307874  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:18.340662  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:18.549269  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:18.807014  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:18.839930  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:19.048500  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:19.307094  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:19.339876  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:19.546598  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:19.808777  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:19.840442  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:20.048519  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:20.307636  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:20.342386  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:20.548186  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:20.808736  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:20.840385  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:21.046844  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:21.307812  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:21.340535  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:21.547247  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:21.810964  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:21.839912  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:22.047005  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:22.308316  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:22.340361  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:22.553850  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:22.810345  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:22.848790  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:23.047506  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:23.308474  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:23.340166  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:23.547429  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:23.807580  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:23.843598  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:24.048791  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:24.308312  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:24.343877  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:24.547750  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:24.813163  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:24.839736  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:25.048389  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:25.313951  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:25.341120  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:25.548271  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:25.808285  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:25.840473  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:26.051368  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:26.307907  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:26.340608  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:26.547333  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:26.808929  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:26.841286  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:27.048124  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:27.308115  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:27.340336  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:27.548315  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:27.811839  658394 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 19:59:27.848118  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:28.051553  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:28.307785  658394 kapi.go:107] duration metric: took 1m10.537963201s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0906 19:59:28.340932  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:28.548017  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:28.841201  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:29.046960  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:29.340595  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:29.548349  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:29.840587  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:30.069338  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:30.340493  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:30.548781  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:30.840517  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:31.048133  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:31.340728  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:31.547362  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:31.840421  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:32.047479  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:32.340546  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:32.547503  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:32.841221  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:33.047966  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:33.343333  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:33.548642  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:33.853291  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:34.048541  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:34.340757  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:34.547585  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:34.841084  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:35.047390  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:35.342093  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:35.547154  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:35.840133  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:36.048118  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:36.340987  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:36.547659  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:36.840758  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:37.048374  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 19:59:37.339886  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:37.547866  658394 kapi.go:107] duration metric: took 1m19.52593095s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0906 19:59:37.840999  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:38.340445  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:38.840627  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:39.340355  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:39.840562  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:40.340246  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:40.839669  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:41.341018  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:41.839593  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:42.341010  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:42.839885  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:43.340486  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:43.840575  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:44.339748  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:44.839741  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:45.340753  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:45.839997  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:46.340660  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:46.839814  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:47.339824  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:47.840898  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:48.340049  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:48.839987  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:49.339860  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:49.839623  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:50.340849  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:50.840349  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:51.340805  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:51.840120  658394 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 19:59:52.340387  658394 kapi.go:107] duration metric: took 1m31.042140988s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0906 19:59:52.342462  658394 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-342654 cluster.
	I0906 19:59:52.344326  658394 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0906 19:59:52.345926  658394 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0906 19:59:52.347866  658394 out.go:177] * Enabled addons: cloud-spanner, default-storageclass, storage-provisioner, ingress-dns, inspektor-gadget, metrics-server, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0906 19:59:52.349904  658394 addons.go:502] enable addons completed in 1m40.144641493s: enabled=[cloud-spanner default-storageclass storage-provisioner ingress-dns inspektor-gadget metrics-server volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0906 19:59:52.349948  658394 start.go:233] waiting for cluster config update ...
	I0906 19:59:52.349969  658394 start.go:242] writing updated cluster config ...
	I0906 19:59:52.350313  658394 ssh_runner.go:195] Run: rm -f paused
	I0906 19:59:52.415725  658394 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0906 19:59:52.417798  658394 out.go:177] * Done! kubectl is now configured to use "addons-342654" cluster and "default" namespace by default
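	A minimal sketch of the opt-out described in the gcp-auth hints above (the pod name, the label value "true", and the reuse of the hello-app image are illustrative; only the `gcp-auth-skip-secret` label key comes from the log output):
	
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds                # hypothetical pod name
	    labels:
	      gcp-auth-skip-secret: "true"    # presence of this label key asks the gcp-auth webhook to skip mounting the credentials
	  spec:
	    containers:
	    - name: hello-app
	      image: gcr.io/google-samples/hello-app:1.0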
	
	* 
	* ==> CRI-O <==
	* Sep 06 20:03:16 addons-342654 conmon[4569]: conmon 40657f4ede84f58b0054 <ninfo>: container 4580 exited with status 137
	Sep 06 20:03:16 addons-342654 crio[893]: time="2023-09-06 20:03:16.286454033Z" level=info msg="Stopped container 40657f4ede84f58b005411f44d29a52cce30ab8674cf918c6c006e0093dff067: ingress-nginx/ingress-nginx-controller-5dcd45b5bf-xhk4k/controller" id=0c6753a0-ca5d-4a4b-b052-3375616e4a12 name=/runtime.v1.RuntimeService/StopContainer
	Sep 06 20:03:16 addons-342654 crio[893]: time="2023-09-06 20:03:16.286992206Z" level=info msg="Stopping pod sandbox: 8b0d2e5703910e108027bdc38722e536278153825cbfb9e80053d6c66fd18770" id=bae7f7b4-d76a-4879-b542-93176327b974 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 06 20:03:16 addons-342654 crio[893]: time="2023-09-06 20:03:16.290702142Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-MDJXFTDVXJSTYVB3 - [0:0]\n:KUBE-HP-Z32SFMXKJKALMG4H - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-Z32SFMXKJKALMG4H\n-X KUBE-HP-MDJXFTDVXJSTYVB3\nCOMMIT\n"
	Sep 06 20:03:16 addons-342654 crio[893]: time="2023-09-06 20:03:16.292277664Z" level=info msg="Closing host port tcp:80"
	Sep 06 20:03:16 addons-342654 crio[893]: time="2023-09-06 20:03:16.292326410Z" level=info msg="Closing host port tcp:443"
	Sep 06 20:03:16 addons-342654 crio[893]: time="2023-09-06 20:03:16.293899774Z" level=info msg="Host port tcp:80 does not have an open socket"
	Sep 06 20:03:16 addons-342654 crio[893]: time="2023-09-06 20:03:16.293931487Z" level=info msg="Host port tcp:443 does not have an open socket"
	Sep 06 20:03:16 addons-342654 crio[893]: time="2023-09-06 20:03:16.294159023Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-5dcd45b5bf-xhk4k Namespace:ingress-nginx ID:8b0d2e5703910e108027bdc38722e536278153825cbfb9e80053d6c66fd18770 UID:2c3a795c-e6c4-41af-aad7-fdb28ed3625c NetNS:/var/run/netns/64acc172-bb91-4061-94c2-36f4f95c6adb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 06 20:03:16 addons-342654 crio[893]: time="2023-09-06 20:03:16.294304393Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-5dcd45b5bf-xhk4k from CNI network \"kindnet\" (type=ptp)"
	Sep 06 20:03:16 addons-342654 crio[893]: time="2023-09-06 20:03:16.313314976Z" level=info msg="Stopped pod sandbox: 8b0d2e5703910e108027bdc38722e536278153825cbfb9e80053d6c66fd18770" id=bae7f7b4-d76a-4879-b542-93176327b974 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 06 20:03:16 addons-342654 crio[893]: time="2023-09-06 20:03:16.328046681Z" level=info msg="Removing container: 40657f4ede84f58b005411f44d29a52cce30ab8674cf918c6c006e0093dff067" id=d571bfd5-60d9-4bf2-84e2-6e28130bf4ba name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 06 20:03:16 addons-342654 crio[893]: time="2023-09-06 20:03:16.348180167Z" level=info msg="Removed container 40657f4ede84f58b005411f44d29a52cce30ab8674cf918c6c006e0093dff067: ingress-nginx/ingress-nginx-controller-5dcd45b5bf-xhk4k/controller" id=d571bfd5-60d9-4bf2-84e2-6e28130bf4ba name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 06 20:03:16 addons-342654 crio[893]: time="2023-09-06 20:03:16.718625742Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=948dd403-d7f7-47f9-82aa-4c5a50b11a3a name=/runtime.v1.ImageService/ImageStatus
	Sep 06 20:03:16 addons-342654 crio[893]: time="2023-09-06 20:03:16.718856371Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a39a074194753e46f21cfbf0b4253444939f276ed100d23d5fc568ada19a9ebb,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb],Size_:28999826,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=948dd403-d7f7-47f9-82aa-4c5a50b11a3a name=/runtime.v1.ImageService/ImageStatus
	Sep 06 20:03:16 addons-342654 crio[893]: time="2023-09-06 20:03:16.719888993Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=661d3273-a4ef-4e76-933d-22266c892bbd name=/runtime.v1.ImageService/ImageStatus
	Sep 06 20:03:16 addons-342654 crio[893]: time="2023-09-06 20:03:16.720103557Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a39a074194753e46f21cfbf0b4253444939f276ed100d23d5fc568ada19a9ebb,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb],Size_:28999826,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=661d3273-a4ef-4e76-933d-22266c892bbd name=/runtime.v1.ImageService/ImageStatus
	Sep 06 20:03:16 addons-342654 crio[893]: time="2023-09-06 20:03:16.720892906Z" level=info msg="Creating container: default/hello-world-app-5d77478584-ggh9b/hello-world-app" id=d6667bbf-721f-4f4a-b17c-b28d4f5e2219 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 06 20:03:16 addons-342654 crio[893]: time="2023-09-06 20:03:16.720991154Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 06 20:03:16 addons-342654 crio[893]: time="2023-09-06 20:03:16.802163152Z" level=info msg="Created container acbf5797fb3160ac2a97e6d6cf122ecf5c5a30e8d22338b0546a244c40fa8ea3: default/hello-world-app-5d77478584-ggh9b/hello-world-app" id=d6667bbf-721f-4f4a-b17c-b28d4f5e2219 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 06 20:03:16 addons-342654 crio[893]: time="2023-09-06 20:03:16.803154790Z" level=info msg="Starting container: acbf5797fb3160ac2a97e6d6cf122ecf5c5a30e8d22338b0546a244c40fa8ea3" id=f84376ee-ad2a-4457-9859-9374a99132b9 name=/runtime.v1.RuntimeService/StartContainer
	Sep 06 20:03:16 addons-342654 conmon[8029]: conmon acbf5797fb3160ac2a97 <ninfo>: container 8040 exited with status 1
	Sep 06 20:03:16 addons-342654 crio[893]: time="2023-09-06 20:03:16.818440980Z" level=info msg="Started container" PID=8040 containerID=acbf5797fb3160ac2a97e6d6cf122ecf5c5a30e8d22338b0546a244c40fa8ea3 description=default/hello-world-app-5d77478584-ggh9b/hello-world-app id=f84376ee-ad2a-4457-9859-9374a99132b9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5bedad5641da9fb89cd91f1739c6966f829e41e5b54a2edfbd25e131b79c776c
	Sep 06 20:03:17 addons-342654 crio[893]: time="2023-09-06 20:03:17.333104185Z" level=info msg="Removing container: d25d6b2d301e6a5fd532ce5721b475b5c0375a77f3dfac3ebdc2e4f8377e8a5a" id=3d5fd09e-7f14-407e-a9c8-4425cad0489b name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 06 20:03:17 addons-342654 crio[893]: time="2023-09-06 20:03:17.360322330Z" level=info msg="Removed container d25d6b2d301e6a5fd532ce5721b475b5c0375a77f3dfac3ebdc2e4f8377e8a5a: default/hello-world-app-5d77478584-ggh9b/hello-world-app" id=3d5fd09e-7f14-407e-a9c8-4425cad0489b name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	acbf5797fb316       a39a074194753e46f21cfbf0b4253444939f276ed100d23d5fc568ada19a9ebb                                                             4 seconds ago       Exited              hello-world-app           2                   5bedad5641da9       hello-world-app-5d77478584-ggh9b
	36fa6871c74e7       docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                              2 minutes ago       Running             nginx                     0                   5285e999a4506       nginx
	12748c26dbd53       ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98                        3 minutes ago       Running             headlamp                  0                   004cbbbb62370       headlamp-699c48fb74-cfqpg
	2d29e285299b0       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                 3 minutes ago       Running             gcp-auth                  0                   9ac989f08bb6a       gcp-auth-d4c87556c-ptdvk
	b4d5965a62806       8f2588812ab2947d53d2f99b11142e2be088330ec67837bb82801c0d3501af78                                                             4 minutes ago       Exited              patch                     2                   b02098802de81       ingress-nginx-admission-patch-77zxj
	cfe48b31ff113       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   4 minutes ago       Exited              create                    0                   245c5a4b63dda       ingress-nginx-admission-create-78tlt
	88036e7427ac5       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             4 minutes ago       Running             coredns                   0                   e14c264daca21       coredns-5dd5756b68-kmw8d
	8c7293b75f955       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             4 minutes ago       Running             storage-provisioner       0                   843f11e5c6a71       storage-provisioner
	208f136fb126e       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79                                                             5 minutes ago       Running             kindnet-cni               0                   fdc27e15cbb32       kindnet-cf99k
	7660564a82f53       812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26                                                             5 minutes ago       Running             kube-proxy                0                   e518f889b30ab       kube-proxy-9gvtg
	f7305dbc3709d       8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965                                                             5 minutes ago       Running             kube-controller-manager   0                   f94b7efc88308       kube-controller-manager-addons-342654
	de0d47b71a1df       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                             5 minutes ago       Running             etcd                      0                   1cf91670f5d12       etcd-addons-342654
	0e2821dd2e6aa       b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87                                                             5 minutes ago       Running             kube-scheduler            0                   f7002cf329f77       kube-scheduler-addons-342654
	94175b71b7be6       b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a                                                             5 minutes ago       Running             kube-apiserver            0                   2c87b202580a1       kube-apiserver-addons-342654
	
	* 
	* ==> coredns [88036e7427ac5c44a083bc49291b9852266db5d2f8282f097351980378041d44] <==
	* [INFO] 10.244.0.16:58684 - 33809 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000051503s
	[INFO] 10.244.0.16:59405 - 2447 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002143644s
	[INFO] 10.244.0.16:58684 - 1226 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.006404536s
	[INFO] 10.244.0.16:59405 - 27854 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001963574s
	[INFO] 10.244.0.16:58684 - 11311 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001567719s
	[INFO] 10.244.0.16:59405 - 13737 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000123947s
	[INFO] 10.244.0.16:58684 - 29480 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000109867s
	[INFO] 10.244.0.16:36733 - 6080 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000220152s
	[INFO] 10.244.0.16:57032 - 59631 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000158687s
	[INFO] 10.244.0.16:57032 - 6730 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000060184s
	[INFO] 10.244.0.16:36733 - 50532 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000043503s
	[INFO] 10.244.0.16:36733 - 8904 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00006126s
	[INFO] 10.244.0.16:57032 - 53601 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000074224s
	[INFO] 10.244.0.16:57032 - 11782 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000059101s
	[INFO] 10.244.0.16:57032 - 38390 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00006181s
	[INFO] 10.244.0.16:36733 - 7457 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000044611s
	[INFO] 10.244.0.16:57032 - 26911 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061169s
	[INFO] 10.244.0.16:36733 - 57010 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000055253s
	[INFO] 10.244.0.16:36733 - 26776 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000338683s
	[INFO] 10.244.0.16:57032 - 43200 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001683452s
	[INFO] 10.244.0.16:36733 - 27817 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001352974s
	[INFO] 10.244.0.16:57032 - 27007 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00094952s
	[INFO] 10.244.0.16:57032 - 32801 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000071507s
	[INFO] 10.244.0.16:36733 - 14601 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.005738035s
	[INFO] 10.244.0.16:36733 - 59612 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000108833s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-342654
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-342654
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138
	                    minikube.k8s.io/name=addons-342654
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_06T19_58_00_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-342654
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Sep 2023 19:57:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-342654
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Sep 2023 20:03:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Sep 2023 20:03:05 +0000   Wed, 06 Sep 2023 19:57:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Sep 2023 20:03:05 +0000   Wed, 06 Sep 2023 19:57:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Sep 2023 20:03:05 +0000   Wed, 06 Sep 2023 19:57:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Sep 2023 20:03:05 +0000   Wed, 06 Sep 2023 19:58:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-342654
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 d47958e4a21c4d7689329bac8ac8a784
	  System UUID:                a08159c7-7320-4676-a10a-c5c0bbfcf7d9
	  Boot ID:                    d5624a78-31f3-41c0-a03f-adfa6e3f71eb
	  Kernel Version:             5.15.0-1044-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-ggh9b         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  gcp-auth                    gcp-auth-d4c87556c-ptdvk                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  headlamp                    headlamp-699c48fb74-cfqpg                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 coredns-5dd5756b68-kmw8d                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m9s
	  kube-system                 etcd-addons-342654                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m21s
	  kube-system                 kindnet-cf99k                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m10s
	  kube-system                 kube-apiserver-addons-342654             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-controller-manager-addons-342654    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-proxy-9gvtg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-scheduler-addons-342654             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m4s   kube-proxy       
	  Normal  Starting                 5m22s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m22s  kubelet          Node addons-342654 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m22s  kubelet          Node addons-342654 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m22s  kubelet          Node addons-342654 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m10s  node-controller  Node addons-342654 event: Registered Node addons-342654 in Controller
	  Normal  NodeReady                4m38s  kubelet          Node addons-342654 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001141] FS-Cache: O-key=[8] 'e0d1c90000000000'
	[  +0.000733] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.000937] FS-Cache: N-cookie d=00000000a39b565b{9p.inode} n=0000000029b5a0e8
	[  +0.001054] FS-Cache: N-key=[8] 'e0d1c90000000000'
	[  +0.002828] FS-Cache: Duplicate cookie detected
	[  +0.000706] FS-Cache: O-cookie c=00000017 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001041] FS-Cache: O-cookie d=00000000a39b565b{9p.inode} n=0000000088b82687
	[  +0.001173] FS-Cache: O-key=[8] 'e0d1c90000000000'
	[  +0.000735] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.000974] FS-Cache: N-cookie d=00000000a39b565b{9p.inode} n=0000000050869d71
	[  +0.001106] FS-Cache: N-key=[8] 'e0d1c90000000000'
	[  +2.758465] FS-Cache: Duplicate cookie detected
	[  +0.000793] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001082] FS-Cache: O-cookie d=00000000a39b565b{9p.inode} n=000000002136bb83
	[  +0.001121] FS-Cache: O-key=[8] 'dfd1c90000000000'
	[  +0.000941] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000982] FS-Cache: N-cookie d=00000000a39b565b{9p.inode} n=0000000029b5a0e8
	[  +0.001218] FS-Cache: N-key=[8] 'dfd1c90000000000'
	[  +0.285377] FS-Cache: Duplicate cookie detected
	[  +0.000813] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001065] FS-Cache: O-cookie d=00000000a39b565b{9p.inode} n=0000000063c06d1b
	[  +0.001196] FS-Cache: O-key=[8] 'e5d1c90000000000'
	[  +0.000738] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001086] FS-Cache: N-cookie d=00000000a39b565b{9p.inode} n=0000000010c9e4c4
	[  +0.001375] FS-Cache: N-key=[8] 'e5d1c90000000000'
	
	* 
	* ==> etcd [de0d47b71a1df80170097a9c98d0e6e610e002feb4f7ce7311b887bb332d7b3b] <==
	* {"level":"info","ts":"2023-09-06T19:57:53.123472Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-06T19:57:53.126152Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-06T19:57:53.126474Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-06T19:57:53.131612Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-09-06T19:57:53.132133Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-06T19:57:53.132272Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-06T19:57:53.132337Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-06T19:57:53.150103Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-06T19:57:53.150147Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-06T19:58:12.799939Z","caller":"traceutil/trace.go:171","msg":"trace[1720555223] linearizableReadLoop","detail":"{readStateIndex:363; appliedIndex:363; }","duration":"202.442128ms","start":"2023-09-06T19:58:12.597477Z","end":"2023-09-06T19:58:12.799919Z","steps":["trace[1720555223] 'read index received'  (duration: 202.427999ms)","trace[1720555223] 'applied index is now lower than readState.Index'  (duration: 10.683µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-06T19:58:12.855566Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.086779ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-6kmv2\" ","response":"range_response_count:1 size:3994"}
	{"level":"info","ts":"2023-09-06T19:58:12.859347Z","caller":"traceutil/trace.go:171","msg":"trace[532789130] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-6kmv2; range_end:; response_count:1; response_revision:355; }","duration":"261.863488ms","start":"2023-09-06T19:58:12.597446Z","end":"2023-09-06T19:58:12.85931Z","steps":["trace[532789130] 'agreement among raft nodes before linearized reading'  (duration: 202.558977ms)","trace[532789130] 'range keys from in-memory index tree'  (duration: 55.476971ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-06T19:58:12.869321Z","caller":"traceutil/trace.go:171","msg":"trace[1647416989] transaction","detail":"{read_only:false; response_revision:356; number_of_response:1; }","duration":"271.733743ms","start":"2023-09-06T19:58:12.597573Z","end":"2023-09-06T19:58:12.869307Z","steps":["trace[1647416989] 'process raft request'  (duration: 271.596873ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-06T19:58:12.988163Z","caller":"traceutil/trace.go:171","msg":"trace[283525820] linearizableReadLoop","detail":"{readStateIndex:365; appliedIndex:363; }","duration":"188.147708ms","start":"2023-09-06T19:58:12.799993Z","end":"2023-09-06T19:58:12.988141Z","steps":["trace[283525820] 'read index received'  (duration: 69.076001ms)","trace[283525820] 'applied index is now lower than readState.Index'  (duration: 119.070911ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-06T19:58:12.988281Z","caller":"traceutil/trace.go:171","msg":"trace[1026151212] transaction","detail":"{read_only:false; response_revision:357; number_of_response:1; }","duration":"188.445052ms","start":"2023-09-06T19:58:12.799829Z","end":"2023-09-06T19:58:12.988274Z","steps":["trace[1026151212] 'process raft request'  (duration: 177.746047ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-06T19:58:12.988487Z","caller":"traceutil/trace.go:171","msg":"trace[454730861] transaction","detail":"{read_only:false; response_revision:358; number_of_response:1; }","duration":"151.918718ms","start":"2023-09-06T19:58:12.83656Z","end":"2023-09-06T19:58:12.988479Z","steps":["trace[454730861] 'process raft request'  (duration: 151.53919ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-06T19:58:12.999709Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"402.164557ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-node-lease/\" range_end:\"/registry/serviceaccounts/kube-node-lease0\" ","response":"range_response_count:1 size:187"}
	{"level":"info","ts":"2023-09-06T19:58:12.999874Z","caller":"traceutil/trace.go:171","msg":"trace[495808651] range","detail":"{range_begin:/registry/serviceaccounts/kube-node-lease/; range_end:/registry/serviceaccounts/kube-node-lease0; response_count:1; response_revision:358; }","duration":"402.342198ms","start":"2023-09-06T19:58:12.597516Z","end":"2023-09-06T19:58:12.999858Z","steps":["trace[495808651] 'agreement among raft nodes before linearized reading'  (duration: 394.595402ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-06T19:58:12.999948Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-06T19:58:12.597513Z","time spent":"402.423699ms","remote":"127.0.0.1:58980","response type":"/etcdserverpb.KV/Range","request count":0,"request size":88,"response count":1,"response size":211,"request content":"key:\"/registry/serviceaccounts/kube-node-lease/\" range_end:\"/registry/serviceaccounts/kube-node-lease0\" "}
	{"level":"warn","ts":"2023-09-06T19:58:13.005304Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.124962ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-9gvtg\" ","response":"range_response_count:1 size:4422"}
	{"level":"info","ts":"2023-09-06T19:58:13.031796Z","caller":"traceutil/trace.go:171","msg":"trace[446714968] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-9gvtg; range_end:; response_count:1; response_revision:360; }","duration":"154.808938ms","start":"2023-09-06T19:58:12.876955Z","end":"2023-09-06T19:58:13.031764Z","steps":["trace[446714968] 'agreement among raft nodes before linearized reading'  (duration: 127.982809ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-06T19:58:13.016382Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.878935ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2023-09-06T19:58:13.031997Z","caller":"traceutil/trace.go:171","msg":"trace[595578711] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:360; }","duration":"153.505038ms","start":"2023-09-06T19:58:12.878484Z","end":"2023-09-06T19:58:13.031989Z","steps":["trace[595578711] 'agreement among raft nodes before linearized reading'  (duration: 137.840633ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-06T19:58:13.198267Z","caller":"traceutil/trace.go:171","msg":"trace[1262964830] transaction","detail":"{read_only:false; number_of_response:1; response_revision:361; }","duration":"107.98917ms","start":"2023-09-06T19:58:13.090032Z","end":"2023-09-06T19:58:13.198022Z","steps":["trace[1262964830] 'process raft request'  (duration: 46.491333ms)","trace[1262964830] 'compare'  (duration: 61.447638ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-06T19:58:13.281664Z","caller":"traceutil/trace.go:171","msg":"trace[1085271454] transaction","detail":"{read_only:false; response_revision:362; number_of_response:1; }","duration":"151.19561ms","start":"2023-09-06T19:58:13.130437Z","end":"2023-09-06T19:58:13.281633Z","steps":["trace[1085271454] 'process raft request'  (duration: 151.094933ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [2d29e285299b0c13dbc5befcd10d96fdb474fa68c3891d51699f42fa1c51dfec] <==
	* 2023/09/06 19:59:51 GCP Auth Webhook started!
	2023/09/06 19:59:59 Ready to marshal response ...
	2023/09/06 19:59:59 Ready to write response ...
	2023/09/06 19:59:59 Ready to marshal response ...
	2023/09/06 19:59:59 Ready to write response ...
	2023/09/06 19:59:59 Ready to marshal response ...
	2023/09/06 19:59:59 Ready to write response ...
	2023/09/06 20:00:02 Ready to marshal response ...
	2023/09/06 20:00:02 Ready to write response ...
	2023/09/06 20:00:21 Ready to marshal response ...
	2023/09/06 20:00:21 Ready to write response ...
	2023/09/06 20:00:34 Ready to marshal response ...
	2023/09/06 20:00:34 Ready to write response ...
	2023/09/06 20:00:45 Ready to marshal response ...
	2023/09/06 20:00:45 Ready to write response ...
	2023/09/06 20:02:55 Ready to marshal response ...
	2023/09/06 20:02:55 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  20:03:21 up  2:42,  0 users,  load average: 0.45, 1.14, 1.69
	Linux addons-342654 5.15.0-1044-aws #49~20.04.1-Ubuntu SMP Mon Aug 21 17:10:24 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [208f136fb126ed76f56d3c589dcf494d492eaf6d33980d35fe91a925dd6902f0] <==
	* I0906 20:01:13.413716       1 main.go:227] handling current node
	I0906 20:01:23.423015       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0906 20:01:23.423043       1 main.go:227] handling current node
	I0906 20:01:33.433094       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0906 20:01:33.433123       1 main.go:227] handling current node
	I0906 20:01:43.437288       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0906 20:01:43.437319       1 main.go:227] handling current node
	I0906 20:01:53.449855       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0906 20:01:53.449883       1 main.go:227] handling current node
	I0906 20:02:03.454247       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0906 20:02:03.454283       1 main.go:227] handling current node
	I0906 20:02:13.465039       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0906 20:02:13.465067       1 main.go:227] handling current node
	I0906 20:02:23.469259       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0906 20:02:23.469290       1 main.go:227] handling current node
	I0906 20:02:33.477584       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0906 20:02:33.477616       1 main.go:227] handling current node
	I0906 20:02:43.481679       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0906 20:02:43.481708       1 main.go:227] handling current node
	I0906 20:02:53.494380       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0906 20:02:53.494408       1 main.go:227] handling current node
	I0906 20:03:03.498373       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0906 20:03:03.498401       1 main.go:227] handling current node
	I0906 20:03:13.506341       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0906 20:03:13.506371       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [94175b71b7be692b2aa5fcbe0f056274880271b0f3cc9b4591ce30242bb97b17] <==
	* E0906 20:01:03.127234       1 controller.go:159] removing "v1beta1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0906 20:01:03.127451       1 controller.go:159] removing "v1beta1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	W0906 20:01:04.008525       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0906 20:01:04.097980       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0906 20:01:04.118260       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0906 20:01:06.325337       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","system","node-high","leader-election","workload-high","workload-low","global-default","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E0906 20:01:16.325663       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","global-default","catch-all","exempt","system","node-high","leader-election","workload-high"] items=[{},{},{},{},{},{},{},{}]
	E0906 20:01:26.326103       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt","system"] items=[{},{},{},{},{},{},{},{}]
	E0906 20:01:36.326439       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","catch-all","exempt","system","node-high","leader-election","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E0906 20:01:46.327463       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt","system"] items=[{},{},{},{},{},{},{},{}]
	E0906 20:01:55.152962       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0906 20:01:55.152994       1 handler_proxy.go:93] no RequestInfo found in the context
	E0906 20:01:55.153034       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0906 20:01:55.153042       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0906 20:01:56.328363       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","system","node-high","leader-election","workload-high","workload-low","global-default","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E0906 20:02:06.328859       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","global-default","catch-all","exempt","system","node-high","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E0906 20:02:16.329445       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt","system"] items=[{},{},{},{},{},{},{},{}]
	E0906 20:02:26.330113       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt","system"] items=[{},{},{},{},{},{},{},{}]
	E0906 20:02:36.331297       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","global-default","catch-all","exempt","system","node-high","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E0906 20:02:46.331743       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","catch-all","exempt","system","node-high","leader-election","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	I0906 20:02:55.999089       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.101.117"}
	E0906 20:02:56.332783       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E0906 20:03:06.333886       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","global-default","catch-all","exempt","system","node-high","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E0906 20:03:16.335024       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","system","node-high","leader-election","workload-high","workload-low","global-default","catch-all"] items=[{},{},{},{},{},{},{},{}]
	
	* 
	* ==> kube-controller-manager [f7305dbc3709d133b2a892b1a79a01f4bcac269fbba6341383a50a86ddc05790] <==
	* W0906 20:02:15.587337       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 20:02:15.587455       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0906 20:02:30.017217       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 20:02:30.017252       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0906 20:02:55.730484       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0906 20:02:55.772631       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-ggh9b"
	I0906 20:02:55.787107       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="56.286019ms"
	I0906 20:02:55.797345       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="10.109828ms"
	I0906 20:02:55.797630       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="51.618µs"
	I0906 20:02:55.814637       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="165.645µs"
	W0906 20:02:57.711979       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 20:02:57.712145       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0906 20:02:59.311895       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="85.686µs"
	I0906 20:03:00.305797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="79.811µs"
	I0906 20:03:01.307278       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="85.407µs"
	W0906 20:03:04.539494       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 20:03:04.539531       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0906 20:03:09.211206       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 20:03:09.211321       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0906 20:03:13.074182       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0906 20:03:13.079016       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5dcd45b5bf" duration="5.014µs"
	I0906 20:03:13.089307       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0906 20:03:17.354658       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="97.001µs"
	W0906 20:03:17.961101       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 20:03:17.961233       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [7660564a82f53ebcb437dbfa07a0d916b8cd5fb5261c47248da48e8387584dd2] <==
	* I0906 19:58:16.683430       1 server_others.go:69] "Using iptables proxy"
	I0906 19:58:17.018829       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0906 19:58:17.168832       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0906 19:58:17.172277       1 server_others.go:152] "Using iptables Proxier"
	I0906 19:58:17.172389       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0906 19:58:17.172423       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0906 19:58:17.172524       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0906 19:58:17.172770       1 server.go:846] "Version info" version="v1.28.1"
	I0906 19:58:17.173146       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:58:17.173965       1 config.go:188] "Starting service config controller"
	I0906 19:58:17.174282       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0906 19:58:17.174340       1 config.go:97] "Starting endpoint slice config controller"
	I0906 19:58:17.174370       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0906 19:58:17.174927       1 config.go:315] "Starting node config controller"
	I0906 19:58:17.174976       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0906 19:58:17.274430       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0906 19:58:17.275460       1 shared_informer.go:318] Caches are synced for node config
	I0906 19:58:17.274612       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [0e2821dd2e6aaa581afa4070ebb4d8721420eb31a20752f798313aa94f0e3282] <==
	* I0906 19:57:55.505643       1 serving.go:348] Generated self-signed cert in-memory
	W0906 19:57:57.566775       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0906 19:57:57.566801       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 19:57:57.566810       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 19:57:57.566817       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 19:57:57.594227       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0906 19:57:57.594335       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:57:57.596181       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 19:57:57.596377       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 19:57:57.597724       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0906 19:57:57.598262       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0906 19:57:57.601493       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 19:57:57.601622       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0906 19:57:59.097547       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Sep 06 20:03:06 addons-342654 kubelet[1366]: I0906 20:03:06.717788    1366 scope.go:117] "RemoveContainer" containerID="fa7cd1617a21c1147cccd2d9ce887fdf1305d8c5421dc64650b9cae35ff839cc"
	Sep 06 20:03:06 addons-342654 kubelet[1366]: E0906 20:03:06.718090    1366 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(26cadeaa-f860-448a-b2b2-b97daa013a5c)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="26cadeaa-f860-448a-b2b2-b97daa013a5c"
	Sep 06 20:03:12 addons-342654 kubelet[1366]: I0906 20:03:12.020554    1366 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpgbp\" (UniqueName: \"kubernetes.io/projected/26cadeaa-f860-448a-b2b2-b97daa013a5c-kube-api-access-zpgbp\") pod \"26cadeaa-f860-448a-b2b2-b97daa013a5c\" (UID: \"26cadeaa-f860-448a-b2b2-b97daa013a5c\") "
	Sep 06 20:03:12 addons-342654 kubelet[1366]: I0906 20:03:12.025910    1366 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26cadeaa-f860-448a-b2b2-b97daa013a5c-kube-api-access-zpgbp" (OuterVolumeSpecName: "kube-api-access-zpgbp") pod "26cadeaa-f860-448a-b2b2-b97daa013a5c" (UID: "26cadeaa-f860-448a-b2b2-b97daa013a5c"). InnerVolumeSpecName "kube-api-access-zpgbp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 06 20:03:12 addons-342654 kubelet[1366]: I0906 20:03:12.121875    1366 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zpgbp\" (UniqueName: \"kubernetes.io/projected/26cadeaa-f860-448a-b2b2-b97daa013a5c-kube-api-access-zpgbp\") on node \"addons-342654\" DevicePath \"\""
	Sep 06 20:03:12 addons-342654 kubelet[1366]: I0906 20:03:12.317182    1366 scope.go:117] "RemoveContainer" containerID="fa7cd1617a21c1147cccd2d9ce887fdf1305d8c5421dc64650b9cae35ff839cc"
	Sep 06 20:03:13 addons-342654 kubelet[1366]: I0906 20:03:13.718557    1366 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="26cadeaa-f860-448a-b2b2-b97daa013a5c" path="/var/lib/kubelet/pods/26cadeaa-f860-448a-b2b2-b97daa013a5c/volumes"
	Sep 06 20:03:13 addons-342654 kubelet[1366]: I0906 20:03:13.719989    1366 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6b3d7531-fcf9-4867-bce2-ac37e0516a75" path="/var/lib/kubelet/pods/6b3d7531-fcf9-4867-bce2-ac37e0516a75/volumes"
	Sep 06 20:03:13 addons-342654 kubelet[1366]: I0906 20:03:13.720906    1366 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="95f972fb-3aac-492b-ae28-fd40bea02f2e" path="/var/lib/kubelet/pods/95f972fb-3aac-492b-ae28-fd40bea02f2e/volumes"
	Sep 06 20:03:16 addons-342654 kubelet[1366]: I0906 20:03:16.327082    1366 scope.go:117] "RemoveContainer" containerID="40657f4ede84f58b005411f44d29a52cce30ab8674cf918c6c006e0093dff067"
	Sep 06 20:03:16 addons-342654 kubelet[1366]: I0906 20:03:16.348433    1366 scope.go:117] "RemoveContainer" containerID="40657f4ede84f58b005411f44d29a52cce30ab8674cf918c6c006e0093dff067"
	Sep 06 20:03:16 addons-342654 kubelet[1366]: E0906 20:03:16.348873    1366 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40657f4ede84f58b005411f44d29a52cce30ab8674cf918c6c006e0093dff067\": container with ID starting with 40657f4ede84f58b005411f44d29a52cce30ab8674cf918c6c006e0093dff067 not found: ID does not exist" containerID="40657f4ede84f58b005411f44d29a52cce30ab8674cf918c6c006e0093dff067"
	Sep 06 20:03:16 addons-342654 kubelet[1366]: I0906 20:03:16.348922    1366 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40657f4ede84f58b005411f44d29a52cce30ab8674cf918c6c006e0093dff067"} err="failed to get container status \"40657f4ede84f58b005411f44d29a52cce30ab8674cf918c6c006e0093dff067\": rpc error: code = NotFound desc = could not find container \"40657f4ede84f58b005411f44d29a52cce30ab8674cf918c6c006e0093dff067\": container with ID starting with 40657f4ede84f58b005411f44d29a52cce30ab8674cf918c6c006e0093dff067 not found: ID does not exist"
	Sep 06 20:03:16 addons-342654 kubelet[1366]: I0906 20:03:16.451930    1366 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2c3a795c-e6c4-41af-aad7-fdb28ed3625c-webhook-cert\") pod \"2c3a795c-e6c4-41af-aad7-fdb28ed3625c\" (UID: \"2c3a795c-e6c4-41af-aad7-fdb28ed3625c\") "
	Sep 06 20:03:16 addons-342654 kubelet[1366]: I0906 20:03:16.452000    1366 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb8cn\" (UniqueName: \"kubernetes.io/projected/2c3a795c-e6c4-41af-aad7-fdb28ed3625c-kube-api-access-qb8cn\") pod \"2c3a795c-e6c4-41af-aad7-fdb28ed3625c\" (UID: \"2c3a795c-e6c4-41af-aad7-fdb28ed3625c\") "
	Sep 06 20:03:16 addons-342654 kubelet[1366]: I0906 20:03:16.454539    1366 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c3a795c-e6c4-41af-aad7-fdb28ed3625c-kube-api-access-qb8cn" (OuterVolumeSpecName: "kube-api-access-qb8cn") pod "2c3a795c-e6c4-41af-aad7-fdb28ed3625c" (UID: "2c3a795c-e6c4-41af-aad7-fdb28ed3625c"). InnerVolumeSpecName "kube-api-access-qb8cn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 06 20:03:16 addons-342654 kubelet[1366]: I0906 20:03:16.455332    1366 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c3a795c-e6c4-41af-aad7-fdb28ed3625c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "2c3a795c-e6c4-41af-aad7-fdb28ed3625c" (UID: "2c3a795c-e6c4-41af-aad7-fdb28ed3625c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 06 20:03:16 addons-342654 kubelet[1366]: I0906 20:03:16.552612    1366 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qb8cn\" (UniqueName: \"kubernetes.io/projected/2c3a795c-e6c4-41af-aad7-fdb28ed3625c-kube-api-access-qb8cn\") on node \"addons-342654\" DevicePath \"\""
	Sep 06 20:03:16 addons-342654 kubelet[1366]: I0906 20:03:16.552650    1366 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2c3a795c-e6c4-41af-aad7-fdb28ed3625c-webhook-cert\") on node \"addons-342654\" DevicePath \"\""
	Sep 06 20:03:16 addons-342654 kubelet[1366]: I0906 20:03:16.717980    1366 scope.go:117] "RemoveContainer" containerID="d25d6b2d301e6a5fd532ce5721b475b5c0375a77f3dfac3ebdc2e4f8377e8a5a"
	Sep 06 20:03:17 addons-342654 kubelet[1366]: I0906 20:03:17.330715    1366 scope.go:117] "RemoveContainer" containerID="d25d6b2d301e6a5fd532ce5721b475b5c0375a77f3dfac3ebdc2e4f8377e8a5a"
	Sep 06 20:03:17 addons-342654 kubelet[1366]: I0906 20:03:17.330945    1366 scope.go:117] "RemoveContainer" containerID="acbf5797fb3160ac2a97e6d6cf122ecf5c5a30e8d22338b0546a244c40fa8ea3"
	Sep 06 20:03:17 addons-342654 kubelet[1366]: E0906 20:03:17.331240    1366 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-ggh9b_default(03e963b3-9c42-4f60-b8c5-40dfb897f78e)\"" pod="default/hello-world-app-5d77478584-ggh9b" podUID="03e963b3-9c42-4f60-b8c5-40dfb897f78e"
	Sep 06 20:03:17 addons-342654 kubelet[1366]: I0906 20:03:17.719141    1366 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2c3a795c-e6c4-41af-aad7-fdb28ed3625c" path="/var/lib/kubelet/pods/2c3a795c-e6c4-41af-aad7-fdb28ed3625c/volumes"
	Sep 06 20:03:21 addons-342654 kubelet[1366]: E0906 20:03:21.991907    1366 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/90c1604f8037c6f1ea272b2fa852436ddf15ab3a1ba38b3923e93f94208b982d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/90c1604f8037c6f1ea272b2fa852436ddf15ab3a1ba38b3923e93f94208b982d/diff: no such file or directory, extraDiskErr: <nil>
	
	* 
	* ==> storage-provisioner [8c7293b75f955fa1f0a49ca1b594b28b7b38973492bc9a40fd0b89cab9d61a22] <==
	* I0906 19:58:44.259519       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 19:58:44.278469       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 19:58:44.278730       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 19:58:44.317299       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 19:58:44.317632       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-342654_a28a7786-544e-40f2-bf3a-5444434e35ed!
	I0906 19:58:44.317718       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6976a3fa-ad94-4e0e-9b0a-ad6aa6510352", APIVersion:"v1", ResourceVersion:"800", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-342654_a28a7786-544e-40f2-bf3a-5444434e35ed became leader
	I0906 19:58:44.425903       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-342654_a28a7786-544e-40f2-bf3a-5444434e35ed!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-342654 -n addons-342654
helpers_test.go:261: (dbg) Run:  kubectl --context addons-342654 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (169.99s)
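The kubelet log above shows two pods stuck in CrashLoopBackOff before the test gave up: kube-system/kube-ingress-dns-minikube and default/hello-world-app-5d77478584-ggh9b. A minimal follow-up sketch, assuming the addons-342654 profile (and those pods) still exist; it uses only standard kubectl subcommands, with the pod names taken from the log lines above:

	# Describe the crash-looping ingress-dns pod and dump its previous container log
	kubectl --context addons-342654 -n kube-system describe pod kube-ingress-dns-minikube
	kubectl --context addons-342654 -n kube-system logs kube-ingress-dns-minikube --previous

	# Same checks for the hello-world-app pod created during the test (default namespace)
	kubectl --context addons-342654 describe pod hello-world-app-5d77478584-ggh9b
	kubectl --context addons-342654 logs hello-world-app-5d77478584-ggh9b --previous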

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (182.13s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-949230 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-949230 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.964801642s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-949230 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-949230 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b23e6344-bde1-4fe5-9b44-9568029f8410] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b23e6344-bde1-4fe5-9b44-9568029f8410] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.014503205s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-949230 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0906 20:12:37.102359  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
E0906 20:12:37.107622  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
E0906 20:12:37.117917  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
E0906 20:12:37.138155  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
E0906 20:12:37.178508  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
E0906 20:12:37.258811  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
E0906 20:12:37.419195  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
E0906 20:12:37.739708  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
E0906 20:12:38.380164  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
E0906 20:12:39.660363  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
E0906 20:12:42.220604  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
E0906 20:12:47.340820  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
E0906 20:12:57.581753  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-949230 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.65441736s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-949230 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-949230 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E0906 20:13:18.062150  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.021862758s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

                                                
                                                

                                                
                                                

                                                
                                                
stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-949230 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-949230 addons disable ingress-dns --alsologtostderr -v=1: (1.29036443s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-949230 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-949230 addons disable ingress --alsologtostderr -v=1: (7.583227651s)
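Before the post-mortem, a manual reproduction sketch. It strings together the same commands this run executed (the profile name, flags, and manifests are copied from the test output above and the audit log further down); it assumes the same out/minikube-linux-arm64 binary and that it is run from the directory the tests run in, so the testdata/ paths resolve as logged:

	# Recreate the legacy profile exactly as this run did
	out/minikube-linux-arm64 start -p ingress-addon-legacy-949230 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p ingress-addon-legacy-949230 addons enable ingress --alsologtostderr -v=5
	out/minikube-linux-arm64 -p ingress-addon-legacy-949230 addons enable ingress-dns --alsologtostderr -v=5

	# Deploy the test ingress and backend, then repeat the check that timed out (ssh exit status 28)
	kubectl --context ingress-addon-legacy-949230 replace --force -f testdata/nginx-ingress-v1beta1.yaml
	kubectl --context ingress-addon-legacy-949230 replace --force -f testdata/nginx-pod-svc.yaml
	out/minikube-linux-arm64 -p ingress-addon-legacy-949230 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

	# Repeat the ingress-dns lookup that could not reach the node
	kubectl --context ingress-addon-legacy-949230 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
	nslookup hello-john.test $(out/minikube-linux-arm64 -p ingress-addon-legacy-949230 ip)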
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-949230
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-949230:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e6842fc441fda58407923431711379b51b1d69bbb98d9085e5d062dc4f8f5f57",
	        "Created": "2023-09-06T20:09:05.43568195Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 685649,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-06T20:09:05.7718135Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c0704b3a4f8b9b9ec71e677be36506d49ffd7d56513ca0bdb5d12d8921195405",
	        "ResolvConfPath": "/var/lib/docker/containers/e6842fc441fda58407923431711379b51b1d69bbb98d9085e5d062dc4f8f5f57/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6842fc441fda58407923431711379b51b1d69bbb98d9085e5d062dc4f8f5f57/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6842fc441fda58407923431711379b51b1d69bbb98d9085e5d062dc4f8f5f57/hosts",
	        "LogPath": "/var/lib/docker/containers/e6842fc441fda58407923431711379b51b1d69bbb98d9085e5d062dc4f8f5f57/e6842fc441fda58407923431711379b51b1d69bbb98d9085e5d062dc4f8f5f57-json.log",
	        "Name": "/ingress-addon-legacy-949230",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-949230:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-949230",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/babe14ab77a996f47512a506ef1e15b26c1695b20e3fb54d67585adf52386b99-init/diff:/var/lib/docker/overlay2/ba2e4d17dafea75bb4f24482e38d11907530383cc2bd79f5b12dd92aeb991448/diff",
	                "MergedDir": "/var/lib/docker/overlay2/babe14ab77a996f47512a506ef1e15b26c1695b20e3fb54d67585adf52386b99/merged",
	                "UpperDir": "/var/lib/docker/overlay2/babe14ab77a996f47512a506ef1e15b26c1695b20e3fb54d67585adf52386b99/diff",
	                "WorkDir": "/var/lib/docker/overlay2/babe14ab77a996f47512a506ef1e15b26c1695b20e3fb54d67585adf52386b99/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-949230",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-949230/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-949230",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-949230",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-949230",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "61ed1de9db2c0cd911157bb4cc7a43e146b97a0603bb73acdeac9e139cb20f52",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/61ed1de9db2c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-949230": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e6842fc441fd",
	                        "ingress-addon-legacy-949230"
	                    ],
	                    "NetworkID": "86686edaea5fc424587eda56d4fd91e44a354402073c14dd0024020467c6d30a",
	                    "EndpointID": "0b6a5c6ac82834418c39498244a814139e7a71ea7d4903b96e26d3de9fd2bfdc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
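Most of the inspect dump above matters here only for the Networks block, which carries the address the failing nslookup targeted (192.168.49.2). A small sketch, assuming the container is still running, that pulls just that field with docker's Go-template formatter instead of reading the full JSON:

	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ingress-addon-legacy-949230
	# prints 192.168.49.2 for this run; "out/minikube-linux-arm64 -p ingress-addon-legacy-949230 ip" reports the same address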
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-949230 -n ingress-addon-legacy-949230
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-949230 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-949230 logs -n 25: (1.42622773s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                 |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-687153 ssh findmnt        | functional-687153           | jenkins | v1.31.2 | 06 Sep 23 20:08 UTC | 06 Sep 23 20:08 UTC |
	|                | -T /mount1                           |                             |         |         |                     |                     |
	| start          | -p functional-687153                 | functional-687153           | jenkins | v1.31.2 | 06 Sep 23 20:08 UTC |                     |
	|                | --dry-run --alsologtostderr          |                             |         |         |                     |                     |
	|                | -v=1 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| ssh            | functional-687153 ssh findmnt        | functional-687153           | jenkins | v1.31.2 | 06 Sep 23 20:08 UTC | 06 Sep 23 20:08 UTC |
	|                | -T /mount2                           |                             |         |         |                     |                     |
	| start          | -p functional-687153                 | functional-687153           | jenkins | v1.31.2 | 06 Sep 23 20:08 UTC |                     |
	|                | --dry-run --memory                   |                             |         |         |                     |                     |
	|                | 250MB --alsologtostderr              |                             |         |         |                     |                     |
	|                | --driver=docker                      |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| dashboard      | --url --port 36195                   | functional-687153           | jenkins | v1.31.2 | 06 Sep 23 20:08 UTC | 06 Sep 23 20:08 UTC |
	|                | -p functional-687153                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| ssh            | functional-687153 ssh findmnt        | functional-687153           | jenkins | v1.31.2 | 06 Sep 23 20:08 UTC | 06 Sep 23 20:08 UTC |
	|                | -T /mount3                           |                             |         |         |                     |                     |
	| mount          | -p functional-687153                 | functional-687153           | jenkins | v1.31.2 | 06 Sep 23 20:08 UTC |                     |
	|                | --kill=true                          |                             |         |         |                     |                     |
	| update-context | functional-687153                    | functional-687153           | jenkins | v1.31.2 | 06 Sep 23 20:08 UTC | 06 Sep 23 20:08 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-687153                    | functional-687153           | jenkins | v1.31.2 | 06 Sep 23 20:08 UTC | 06 Sep 23 20:08 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-687153                    | functional-687153           | jenkins | v1.31.2 | 06 Sep 23 20:08 UTC | 06 Sep 23 20:08 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| image          | functional-687153                    | functional-687153           | jenkins | v1.31.2 | 06 Sep 23 20:08 UTC | 06 Sep 23 20:08 UTC |
	|                | image ls --format short              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-687153                    | functional-687153           | jenkins | v1.31.2 | 06 Sep 23 20:08 UTC | 06 Sep 23 20:08 UTC |
	|                | image ls --format yaml               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| ssh            | functional-687153 ssh pgrep          | functional-687153           | jenkins | v1.31.2 | 06 Sep 23 20:08 UTC |                     |
	|                | buildkitd                            |                             |         |         |                     |                     |
	| image          | functional-687153 image build -t     | functional-687153           | jenkins | v1.31.2 | 06 Sep 23 20:08 UTC | 06 Sep 23 20:08 UTC |
	|                | localhost/my-image:functional-687153 |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr     |                             |         |         |                     |                     |
	| image          | functional-687153 image ls           | functional-687153           | jenkins | v1.31.2 | 06 Sep 23 20:08 UTC | 06 Sep 23 20:08 UTC |
	| image          | functional-687153                    | functional-687153           | jenkins | v1.31.2 | 06 Sep 23 20:08 UTC | 06 Sep 23 20:08 UTC |
	|                | image ls --format json               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-687153                    | functional-687153           | jenkins | v1.31.2 | 06 Sep 23 20:08 UTC | 06 Sep 23 20:08 UTC |
	|                | image ls --format table              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| delete         | -p functional-687153                 | functional-687153           | jenkins | v1.31.2 | 06 Sep 23 20:08 UTC | 06 Sep 23 20:08 UTC |
	| start          | -p ingress-addon-legacy-949230       | ingress-addon-legacy-949230 | jenkins | v1.31.2 | 06 Sep 23 20:08 UTC | 06 Sep 23 20:10 UTC |
	|                | --kubernetes-version=v1.18.20        |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true            |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-949230          | ingress-addon-legacy-949230 | jenkins | v1.31.2 | 06 Sep 23 20:10 UTC | 06 Sep 23 20:10 UTC |
	|                | addons enable ingress                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-949230          | ingress-addon-legacy-949230 | jenkins | v1.31.2 | 06 Sep 23 20:10 UTC | 06 Sep 23 20:10 UTC |
	|                | addons enable ingress-dns            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-949230          | ingress-addon-legacy-949230 | jenkins | v1.31.2 | 06 Sep 23 20:10 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/        |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'         |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-949230 ip       | ingress-addon-legacy-949230 | jenkins | v1.31.2 | 06 Sep 23 20:13 UTC | 06 Sep 23 20:13 UTC |
	| addons         | ingress-addon-legacy-949230          | ingress-addon-legacy-949230 | jenkins | v1.31.2 | 06 Sep 23 20:13 UTC | 06 Sep 23 20:13 UTC |
	|                | addons disable ingress-dns           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-949230          | ingress-addon-legacy-949230 | jenkins | v1.31.2 | 06 Sep 23 20:13 UTC | 06 Sep 23 20:13 UTC |
	|                | addons disable ingress               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 20:08:37
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 20:08:37.909110  685198 out.go:296] Setting OutFile to fd 1 ...
	I0906 20:08:37.909361  685198 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:08:37.909387  685198 out.go:309] Setting ErrFile to fd 2...
	I0906 20:08:37.909404  685198 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:08:37.909701  685198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17116-652515/.minikube/bin
	I0906 20:08:37.910163  685198 out.go:303] Setting JSON to false
	I0906 20:08:37.911162  685198 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10072,"bootTime":1694020846,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0906 20:08:37.911268  685198 start.go:138] virtualization:  
	I0906 20:08:37.913977  685198 out.go:177] * [ingress-addon-legacy-949230] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0906 20:08:37.916263  685198 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 20:08:37.918187  685198 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 20:08:37.916443  685198 notify.go:220] Checking for updates...
	I0906 20:08:37.920109  685198 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig
	I0906 20:08:37.922357  685198 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube
	I0906 20:08:37.924035  685198 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0906 20:08:37.925693  685198 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 20:08:37.927534  685198 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 20:08:37.951934  685198 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0906 20:08:37.952026  685198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 20:08:38.048798  685198 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-09-06 20:08:38.03810506 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0906 20:08:38.048915  685198 docker.go:294] overlay module found
	I0906 20:08:38.050885  685198 out.go:177] * Using the docker driver based on user configuration
	I0906 20:08:38.052353  685198 start.go:298] selected driver: docker
	I0906 20:08:38.052374  685198 start.go:902] validating driver "docker" against <nil>
	I0906 20:08:38.052388  685198 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 20:08:38.053067  685198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 20:08:38.127295  685198 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-09-06 20:08:38.117801786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0906 20:08:38.127455  685198 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 20:08:38.127746  685198 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:08:38.129819  685198 out.go:177] * Using Docker driver with root privileges
	I0906 20:08:38.131295  685198 cni.go:84] Creating CNI manager for ""
	I0906 20:08:38.131314  685198 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0906 20:08:38.131325  685198 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0906 20:08:38.131339  685198 start_flags.go:321] config:
	{Name:ingress-addon-legacy-949230 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-949230 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 20:08:38.133900  685198 out.go:177] * Starting control plane node ingress-addon-legacy-949230 in cluster ingress-addon-legacy-949230
	I0906 20:08:38.135378  685198 cache.go:122] Beginning downloading kic base image for docker with crio
	I0906 20:08:38.136975  685198 out.go:177] * Pulling base image ...
	I0906 20:08:38.138550  685198 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0906 20:08:38.138603  685198 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon
	I0906 20:08:38.155876  685198 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon, skipping pull
	I0906 20:08:38.155902  685198 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad exists in daemon, skipping load
	I0906 20:08:38.209335  685198 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0906 20:08:38.209361  685198 cache.go:57] Caching tarball of preloaded images
	I0906 20:08:38.209525  685198 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0906 20:08:38.211563  685198 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0906 20:08:38.213411  685198 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0906 20:08:38.324398  685198 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0906 20:08:57.327647  685198 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0906 20:08:57.327756  685198 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0906 20:08:58.469544  685198 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0906 20:08:58.469916  685198 profile.go:148] Saving config to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/config.json ...
	I0906 20:08:58.469948  685198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/config.json: {Name:mk2f89be5a82977f5b607c18439ed9cdb0b10895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:08:58.470153  685198 cache.go:195] Successfully downloaded all kic artifacts
	I0906 20:08:58.470200  685198 start.go:365] acquiring machines lock for ingress-addon-legacy-949230: {Name:mkccb92f1139ef9a6a91290f0d46bab3df9b7dca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:08:58.470272  685198 start.go:369] acquired machines lock for "ingress-addon-legacy-949230" in 48.033µs
	I0906 20:08:58.470297  685198 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-949230 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-949230 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:08:58.470372  685198 start.go:125] createHost starting for "" (driver="docker")
	I0906 20:08:58.472329  685198 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0906 20:08:58.472533  685198 start.go:159] libmachine.API.Create for "ingress-addon-legacy-949230" (driver="docker")
	I0906 20:08:58.472558  685198 client.go:168] LocalClient.Create starting
	I0906 20:08:58.472650  685198 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem
	I0906 20:08:58.472686  685198 main.go:141] libmachine: Decoding PEM data...
	I0906 20:08:58.472706  685198 main.go:141] libmachine: Parsing certificate...
	I0906 20:08:58.472767  685198 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem
	I0906 20:08:58.472788  685198 main.go:141] libmachine: Decoding PEM data...
	I0906 20:08:58.472801  685198 main.go:141] libmachine: Parsing certificate...
	I0906 20:08:58.473153  685198 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-949230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0906 20:08:58.490588  685198 cli_runner.go:211] docker network inspect ingress-addon-legacy-949230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0906 20:08:58.490669  685198 network_create.go:281] running [docker network inspect ingress-addon-legacy-949230] to gather additional debugging logs...
	I0906 20:08:58.490693  685198 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-949230
	W0906 20:08:58.508941  685198 cli_runner.go:211] docker network inspect ingress-addon-legacy-949230 returned with exit code 1
	I0906 20:08:58.508977  685198 network_create.go:284] error running [docker network inspect ingress-addon-legacy-949230]: docker network inspect ingress-addon-legacy-949230: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-949230 not found
	I0906 20:08:58.508992  685198 network_create.go:286] output of [docker network inspect ingress-addon-legacy-949230]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-949230 not found
	
	** /stderr **
	I0906 20:08:58.509055  685198 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0906 20:08:58.526561  685198 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001177320}
	I0906 20:08:58.526607  685198 network_create.go:123] attempt to create docker network ingress-addon-legacy-949230 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0906 20:08:58.526671  685198 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-949230 ingress-addon-legacy-949230
	I0906 20:08:58.602418  685198 network_create.go:107] docker network ingress-addon-legacy-949230 192.168.49.0/24 created
	I0906 20:08:58.602461  685198 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-949230" container
	I0906 20:08:58.602537  685198 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0906 20:08:58.618784  685198 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-949230 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-949230 --label created_by.minikube.sigs.k8s.io=true
	I0906 20:08:58.637141  685198 oci.go:103] Successfully created a docker volume ingress-addon-legacy-949230
	I0906 20:08:58.637238  685198 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-949230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-949230 --entrypoint /usr/bin/test -v ingress-addon-legacy-949230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -d /var/lib
	I0906 20:09:00.328676  685198 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-949230-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-949230 --entrypoint /usr/bin/test -v ingress-addon-legacy-949230:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -d /var/lib: (1.69139326s)
	I0906 20:09:00.328714  685198 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-949230
	I0906 20:09:00.328746  685198 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0906 20:09:00.328770  685198 kic.go:190] Starting extracting preloaded images to volume ...
	I0906 20:09:00.328865  685198 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-949230:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -I lz4 -xf /preloaded.tar -C /extractDir
	I0906 20:09:05.346901  685198 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-949230:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -I lz4 -xf /preloaded.tar -C /extractDir: (5.01799209s)
	I0906 20:09:05.346949  685198 kic.go:199] duration metric: took 5.018176 seconds to extract preloaded images to volume
	W0906 20:09:05.347105  685198 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0906 20:09:05.347212  685198 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0906 20:09:05.417834  685198 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-949230 --name ingress-addon-legacy-949230 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-949230 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-949230 --network ingress-addon-legacy-949230 --ip 192.168.49.2 --volume ingress-addon-legacy-949230:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad
	I0906 20:09:05.780328  685198 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-949230 --format={{.State.Running}}
	I0906 20:09:05.802465  685198 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-949230 --format={{.State.Status}}
	I0906 20:09:05.829651  685198 cli_runner.go:164] Run: docker exec ingress-addon-legacy-949230 stat /var/lib/dpkg/alternatives/iptables
	I0906 20:09:05.931126  685198 oci.go:144] the created container "ingress-addon-legacy-949230" has a running status.
	I0906 20:09:05.931151  685198 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17116-652515/.minikube/machines/ingress-addon-legacy-949230/id_rsa...
	I0906 20:09:06.375446  685198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/machines/ingress-addon-legacy-949230/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0906 20:09:06.375492  685198 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17116-652515/.minikube/machines/ingress-addon-legacy-949230/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0906 20:09:06.408611  685198 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-949230 --format={{.State.Status}}
	I0906 20:09:06.430968  685198 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0906 20:09:06.430992  685198 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-949230 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0906 20:09:06.542710  685198 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-949230 --format={{.State.Status}}
	I0906 20:09:06.574779  685198 machine.go:88] provisioning docker machine ...
	I0906 20:09:06.574809  685198 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-949230"
	I0906 20:09:06.574876  685198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-949230
	I0906 20:09:06.603888  685198 main.go:141] libmachine: Using SSH client type: native
	I0906 20:09:06.604349  685198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33432 <nil> <nil>}
	I0906 20:09:06.604363  685198 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-949230 && echo "ingress-addon-legacy-949230" | sudo tee /etc/hostname
	I0906 20:09:06.790511  685198 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-949230
	
	I0906 20:09:06.790618  685198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-949230
	I0906 20:09:06.814376  685198 main.go:141] libmachine: Using SSH client type: native
	I0906 20:09:06.814815  685198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33432 <nil> <nil>}
	I0906 20:09:06.814840  685198 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-949230' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-949230/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-949230' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:09:06.975787  685198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:09:06.975857  685198 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17116-652515/.minikube CaCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17116-652515/.minikube}
	I0906 20:09:06.975888  685198 ubuntu.go:177] setting up certificates
	I0906 20:09:06.975950  685198 provision.go:83] configureAuth start
	I0906 20:09:06.976048  685198 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-949230
	I0906 20:09:07.000446  685198 provision.go:138] copyHostCerts
	I0906 20:09:07.000492  685198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem
	I0906 20:09:07.000527  685198 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem, removing ...
	I0906 20:09:07.000536  685198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem
	I0906 20:09:07.000619  685198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem (1082 bytes)
	I0906 20:09:07.000718  685198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem
	I0906 20:09:07.000742  685198 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem, removing ...
	I0906 20:09:07.000752  685198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem
	I0906 20:09:07.000789  685198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem (1123 bytes)
	I0906 20:09:07.000838  685198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem
	I0906 20:09:07.000861  685198 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem, removing ...
	I0906 20:09:07.000865  685198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem
	I0906 20:09:07.000901  685198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem (1679 bytes)
	I0906 20:09:07.000954  685198 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-949230 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-949230]
	I0906 20:09:07.166017  685198 provision.go:172] copyRemoteCerts
	I0906 20:09:07.166104  685198 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:09:07.166145  685198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-949230
	I0906 20:09:07.184915  685198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33432 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/ingress-addon-legacy-949230/id_rsa Username:docker}
	I0906 20:09:07.285014  685198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 20:09:07.285128  685198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 20:09:07.315254  685198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 20:09:07.315326  685198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 20:09:07.345115  685198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 20:09:07.345224  685198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0906 20:09:07.375238  685198 provision.go:86] duration metric: configureAuth took 399.256344ms
	I0906 20:09:07.375306  685198 ubuntu.go:193] setting minikube options for container-runtime
	I0906 20:09:07.375518  685198 config.go:182] Loaded profile config "ingress-addon-legacy-949230": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0906 20:09:07.375631  685198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-949230
	I0906 20:09:07.394141  685198 main.go:141] libmachine: Using SSH client type: native
	I0906 20:09:07.394607  685198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33432 <nil> <nil>}
	I0906 20:09:07.394630  685198 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:09:07.677682  685198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:09:07.677707  685198 machine.go:91] provisioned docker machine in 1.102909907s
	I0906 20:09:07.677719  685198 client.go:171] LocalClient.Create took 9.205156338s
	I0906 20:09:07.677731  685198 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-949230" took 9.205197724s
	I0906 20:09:07.677738  685198 start.go:300] post-start starting for "ingress-addon-legacy-949230" (driver="docker")
	I0906 20:09:07.677747  685198 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:09:07.677833  685198 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:09:07.677976  685198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-949230
	I0906 20:09:07.699977  685198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33432 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/ingress-addon-legacy-949230/id_rsa Username:docker}
	I0906 20:09:07.801495  685198 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:09:07.806397  685198 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 20:09:07.806447  685198 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 20:09:07.806460  685198 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 20:09:07.806468  685198 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0906 20:09:07.806485  685198 filesync.go:126] Scanning /home/jenkins/minikube-integration/17116-652515/.minikube/addons for local assets ...
	I0906 20:09:07.806587  685198 filesync.go:126] Scanning /home/jenkins/minikube-integration/17116-652515/.minikube/files for local assets ...
	I0906 20:09:07.806712  685198 filesync.go:149] local asset: /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem -> 6579002.pem in /etc/ssl/certs
	I0906 20:09:07.806780  685198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem -> /etc/ssl/certs/6579002.pem
	I0906 20:09:07.806896  685198 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:09:07.817929  685198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem --> /etc/ssl/certs/6579002.pem (1708 bytes)
	I0906 20:09:07.847950  685198 start.go:303] post-start completed in 170.197842ms
	I0906 20:09:07.848363  685198 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-949230
	I0906 20:09:07.870657  685198 profile.go:148] Saving config to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/config.json ...
	I0906 20:09:07.870942  685198 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 20:09:07.870993  685198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-949230
	I0906 20:09:07.888607  685198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33432 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/ingress-addon-legacy-949230/id_rsa Username:docker}
	I0906 20:09:07.988543  685198 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 20:09:07.994324  685198 start.go:128] duration metric: createHost completed in 9.523933902s
	I0906 20:09:07.994350  685198 start.go:83] releasing machines lock for "ingress-addon-legacy-949230", held for 9.524065659s
	I0906 20:09:07.994424  685198 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-949230
	I0906 20:09:08.017985  685198 ssh_runner.go:195] Run: cat /version.json
	I0906 20:09:08.018066  685198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-949230
	I0906 20:09:08.018326  685198 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:09:08.018392  685198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-949230
	I0906 20:09:08.047621  685198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33432 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/ingress-addon-legacy-949230/id_rsa Username:docker}
	I0906 20:09:08.049917  685198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33432 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/ingress-addon-legacy-949230/id_rsa Username:docker}
	I0906 20:09:08.277869  685198 ssh_runner.go:195] Run: systemctl --version
	I0906 20:09:08.283743  685198 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:09:08.433023  685198 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 20:09:08.439509  685198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:09:08.463555  685198 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0906 20:09:08.463704  685198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:09:08.501026  685198 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0906 20:09:08.501051  685198 start.go:466] detecting cgroup driver to use...
	I0906 20:09:08.501116  685198 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0906 20:09:08.501193  685198 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:09:08.519587  685198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:09:08.535102  685198 docker.go:196] disabling cri-docker service (if available) ...
	I0906 20:09:08.535232  685198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:09:08.551830  685198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:09:08.569225  685198 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:09:08.663065  685198 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:09:08.782220  685198 docker.go:212] disabling docker service ...
	I0906 20:09:08.782315  685198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:09:08.806139  685198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:09:08.821322  685198 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:09:08.923117  685198 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:09:09.033472  685198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:09:09.048894  685198 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:09:09.073535  685198 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0906 20:09:09.073630  685198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:09:09.087764  685198 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:09:09.087871  685198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:09:09.102097  685198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:09:09.115762  685198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:09:09.129390  685198 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:09:09.142627  685198 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:09:09.153489  685198 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:09:09.164099  685198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:09:09.265012  685198 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:09:09.396735  685198 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:09:09.396810  685198 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:09:09.402239  685198 start.go:534] Will wait 60s for crictl version
	I0906 20:09:09.402306  685198 ssh_runner.go:195] Run: which crictl
	I0906 20:09:09.407108  685198 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:09:09.460347  685198 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0906 20:09:09.460439  685198 ssh_runner.go:195] Run: crio --version
	I0906 20:09:09.502443  685198 ssh_runner.go:195] Run: crio --version
	I0906 20:09:09.551469  685198 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0906 20:09:09.553206  685198 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-949230 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0906 20:09:09.573166  685198 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0906 20:09:09.578027  685198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:09:09.591708  685198 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0906 20:09:09.591782  685198 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:09:09.648999  685198 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0906 20:09:09.649076  685198 ssh_runner.go:195] Run: which lz4
	I0906 20:09:09.653661  685198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0906 20:09:09.653774  685198 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0906 20:09:09.658445  685198 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 20:09:09.658487  685198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I0906 20:09:11.720595  685198 crio.go:444] Took 2.066850 seconds to copy over tarball
	I0906 20:09:11.720705  685198 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 20:09:14.447074  685198 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.726328639s)
	I0906 20:09:14.447110  685198 crio.go:451] Took 2.726454 seconds to extract the tarball
	I0906 20:09:14.447121  685198 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 20:09:14.682835  685198 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:09:14.724490  685198 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0906 20:09:14.724521  685198 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0906 20:09:14.724629  685198 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:09:14.724820  685198 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0906 20:09:14.724886  685198 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0906 20:09:14.724958  685198 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0906 20:09:14.725037  685198 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0906 20:09:14.725119  685198 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0906 20:09:14.725185  685198 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0906 20:09:14.725268  685198 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0906 20:09:14.726638  685198 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:09:14.727068  685198 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0906 20:09:14.727431  685198 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0906 20:09:14.727924  685198 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0906 20:09:14.727633  685198 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0906 20:09:14.727683  685198 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0906 20:09:14.728956  685198 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0906 20:09:14.729151  685198 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	W0906 20:09:15.074490  685198 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0906 20:09:15.074778  685198 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0906 20:09:15.143041  685198 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W0906 20:09:15.157603  685198 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0906 20:09:15.157873  685198 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W0906 20:09:15.159811  685198 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0906 20:09:15.159983  685198 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W0906 20:09:15.160834  685198 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0906 20:09:15.161028  685198 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0906 20:09:15.165868  685198 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0906 20:09:15.165913  685198 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0906 20:09:15.165966  685198 ssh_runner.go:195] Run: which crictl
	W0906 20:09:15.181512  685198 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0906 20:09:15.181694  685198 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W0906 20:09:15.192881  685198 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0906 20:09:15.193066  685198 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0906 20:09:15.292462  685198 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0906 20:09:15.292507  685198 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0906 20:09:15.292556  685198 ssh_runner.go:195] Run: which crictl
	W0906 20:09:15.304792  685198 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0906 20:09:15.304956  685198 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:09:15.358800  685198 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0906 20:09:15.358844  685198 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0906 20:09:15.358891  685198 ssh_runner.go:195] Run: which crictl
	I0906 20:09:15.358978  685198 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0906 20:09:15.358996  685198 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0906 20:09:15.359019  685198 ssh_runner.go:195] Run: which crictl
	I0906 20:09:15.359088  685198 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0906 20:09:15.359105  685198 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0906 20:09:15.359127  685198 ssh_runner.go:195] Run: which crictl
	I0906 20:09:15.359186  685198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0906 20:09:15.359238  685198 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0906 20:09:15.359260  685198 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0906 20:09:15.359284  685198 ssh_runner.go:195] Run: which crictl
	I0906 20:09:15.384111  685198 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0906 20:09:15.384155  685198 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0906 20:09:15.384205  685198 ssh_runner.go:195] Run: which crictl
	I0906 20:09:15.384285  685198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 20:09:15.548629  685198 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0906 20:09:15.548716  685198 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:09:15.548803  685198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0906 20:09:15.548808  685198 ssh_runner.go:195] Run: which crictl
	I0906 20:09:15.548899  685198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0906 20:09:15.548939  685198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0906 20:09:15.549000  685198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0906 20:09:15.549085  685198 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0906 20:09:15.549173  685198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0906 20:09:15.549205  685198 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0906 20:09:15.696090  685198 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0906 20:09:15.696204  685198 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0906 20:09:15.696259  685198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:09:15.696308  685198 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0906 20:09:15.696381  685198 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0906 20:09:15.696460  685198 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0906 20:09:15.765055  685198 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0906 20:09:15.765189  685198 cache_images.go:92] LoadImages completed in 1.040654297s
	W0906 20:09:15.765291  685198 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
	I0906 20:09:15.765398  685198 ssh_runner.go:195] Run: crio config
	I0906 20:09:15.838165  685198 cni.go:84] Creating CNI manager for ""
	I0906 20:09:15.838196  685198 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0906 20:09:15.838274  685198 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 20:09:15.838302  685198 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-949230 NodeName:ingress-addon-legacy-949230 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0906 20:09:15.838506  685198 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-949230"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 20:09:15.838627  685198 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-949230 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-949230 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 20:09:15.838730  685198 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0906 20:09:15.849767  685198 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:09:15.849865  685198 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:09:15.860854  685198 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0906 20:09:15.882629  685198 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0906 20:09:15.904463  685198 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0906 20:09:15.926284  685198 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0906 20:09:15.931163  685198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:09:15.945344  685198 certs.go:56] Setting up /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230 for IP: 192.168.49.2
	I0906 20:09:15.945412  685198 certs.go:190] acquiring lock for shared ca certs: {Name:mk5596cf7beb26b5b83b50e551aa70cf266830a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:15.945571  685198 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.key
	I0906 20:09:15.945623  685198 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.key
	I0906 20:09:15.945676  685198 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.key
	I0906 20:09:15.945692  685198 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt with IP's: []
	I0906 20:09:16.126497  685198 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt ...
	I0906 20:09:16.126563  685198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: {Name:mk697d93aafa9eccd9823bfbd430ee103051dd4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:16.126777  685198 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.key ...
	I0906 20:09:16.126791  685198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.key: {Name:mka9a2d0751e96a7215ce5326f2a2e7d7346a3a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:16.126882  685198 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/apiserver.key.dd3b5fb2
	I0906 20:09:16.126899  685198 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0906 20:09:16.640932  685198 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/apiserver.crt.dd3b5fb2 ...
	I0906 20:09:16.640965  685198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/apiserver.crt.dd3b5fb2: {Name:mkafa3658c248da3ac02b3bd430ecc6a29150666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:16.641151  685198 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/apiserver.key.dd3b5fb2 ...
	I0906 20:09:16.641164  685198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/apiserver.key.dd3b5fb2: {Name:mk1ded4a5030c90eba4d3a93569eff3bca4d5b60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:16.641248  685198 certs.go:337] copying /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/apiserver.crt
	I0906 20:09:16.641326  685198 certs.go:341] copying /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/apiserver.key
	I0906 20:09:16.641385  685198 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/proxy-client.key
	I0906 20:09:16.641400  685198 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/proxy-client.crt with IP's: []
	I0906 20:09:18.105236  685198 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/proxy-client.crt ...
	I0906 20:09:18.105275  685198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/proxy-client.crt: {Name:mk8fb6822c84087a6570a3f63f8e1d30bd717379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:18.105512  685198 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/proxy-client.key ...
	I0906 20:09:18.105526  685198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/proxy-client.key: {Name:mk684d434d62deb8f060365efb410786549dca0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:18.105611  685198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 20:09:18.105631  685198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 20:09:18.105646  685198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 20:09:18.105664  685198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 20:09:18.105675  685198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 20:09:18.105691  685198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 20:09:18.105704  685198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 20:09:18.105717  685198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 20:09:18.105777  685198 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/657900.pem (1338 bytes)
	W0906 20:09:18.105821  685198 certs.go:433] ignoring /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/657900_empty.pem, impossibly tiny 0 bytes
	I0906 20:09:18.105834  685198 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:09:18.105866  685198 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem (1082 bytes)
	I0906 20:09:18.105899  685198 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:09:18.105929  685198 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem (1679 bytes)
	I0906 20:09:18.105981  685198 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem (1708 bytes)
	I0906 20:09:18.106016  685198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/657900.pem -> /usr/share/ca-certificates/657900.pem
	I0906 20:09:18.106032  685198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem -> /usr/share/ca-certificates/6579002.pem
	I0906 20:09:18.106062  685198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:09:18.106844  685198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 20:09:18.138685  685198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 20:09:18.169025  685198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:09:18.198500  685198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 20:09:18.227958  685198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:09:18.257891  685198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 20:09:18.287769  685198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:09:18.316999  685198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:09:18.347547  685198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/certs/657900.pem --> /usr/share/ca-certificates/657900.pem (1338 bytes)
	I0906 20:09:18.378378  685198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem --> /usr/share/ca-certificates/6579002.pem (1708 bytes)
	I0906 20:09:18.407682  685198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:09:18.437189  685198 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:09:18.459418  685198 ssh_runner.go:195] Run: openssl version
	I0906 20:09:18.466828  685198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/657900.pem && ln -fs /usr/share/ca-certificates/657900.pem /etc/ssl/certs/657900.pem"
	I0906 20:09:18.478617  685198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/657900.pem
	I0906 20:09:18.483593  685198 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 20:04 /usr/share/ca-certificates/657900.pem
	I0906 20:09:18.483696  685198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/657900.pem
	I0906 20:09:18.492487  685198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/657900.pem /etc/ssl/certs/51391683.0"
	I0906 20:09:18.504661  685198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6579002.pem && ln -fs /usr/share/ca-certificates/6579002.pem /etc/ssl/certs/6579002.pem"
	I0906 20:09:18.517129  685198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6579002.pem
	I0906 20:09:18.521981  685198 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 20:04 /usr/share/ca-certificates/6579002.pem
	I0906 20:09:18.522102  685198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6579002.pem
	I0906 20:09:18.531022  685198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6579002.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:09:18.543501  685198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:09:18.555822  685198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:09:18.560830  685198 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:09:18.560940  685198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:09:18.569944  685198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
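The ln -fs commands above create OpenSSL-style hash links (51391683.0, 3ec20f2e.0, b5213941.0); the link name is the certificate's subject hash. A sketch of the same step run by hand inside the node, mirroring the commands in the log:
	# subject hash -> /etc/ssl/certs/<hash>.0 (for minikubeCA.pem this prints b5213941 in this run)
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"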
	I0906 20:09:18.582113  685198 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0906 20:09:18.586692  685198 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0906 20:09:18.586765  685198 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-949230 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-949230 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 20:09:18.586854  685198 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:09:18.586916  685198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:09:18.629401  685198 cri.go:89] found id: ""
	I0906 20:09:18.629471  685198 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:09:18.640312  685198 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:09:18.651227  685198 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0906 20:09:18.651346  685198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:09:18.662697  685198 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:09:18.662749  685198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 20:09:18.720697  685198 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0906 20:09:18.721117  685198 kubeadm.go:322] [preflight] Running pre-flight checks
	I0906 20:09:18.774371  685198 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0906 20:09:18.774442  685198 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1044-aws
	I0906 20:09:18.774479  685198 kubeadm.go:322] OS: Linux
	I0906 20:09:18.774526  685198 kubeadm.go:322] CGROUPS_CPU: enabled
	I0906 20:09:18.774579  685198 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0906 20:09:18.774627  685198 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0906 20:09:18.774676  685198 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0906 20:09:18.774726  685198 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0906 20:09:18.774774  685198 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0906 20:09:18.863716  685198 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:09:18.863898  685198 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:09:18.864044  685198 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 20:09:19.121297  685198 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:09:19.123436  685198 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:09:19.123539  685198 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0906 20:09:19.238642  685198 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:09:19.243063  685198 out.go:204]   - Generating certificates and keys ...
	I0906 20:09:19.243206  685198 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0906 20:09:19.243312  685198 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0906 20:09:19.667023  685198 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 20:09:21.321812  685198 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0906 20:09:22.121817  685198 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0906 20:09:22.727701  685198 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0906 20:09:23.467477  685198 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0906 20:09:23.467899  685198 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-949230 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0906 20:09:23.659832  685198 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0906 20:09:23.660264  685198 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-949230 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0906 20:09:23.974937  685198 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 20:09:24.636803  685198 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 20:09:25.244834  685198 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0906 20:09:25.245104  685198 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:09:26.783808  685198 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:09:27.116473  685198 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:09:27.618939  685198 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:09:28.336894  685198 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:09:28.338170  685198 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:09:28.340128  685198 out.go:204]   - Booting up control plane ...
	I0906 20:09:28.340235  685198 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:09:28.347161  685198 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:09:28.349719  685198 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:09:28.351652  685198 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:09:28.356830  685198 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 20:09:40.861748  685198 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.502822 seconds
	I0906 20:09:40.861866  685198 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 20:09:40.884515  685198 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 20:09:41.403877  685198 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 20:09:41.404023  685198 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-949230 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0906 20:09:41.922801  685198 kubeadm.go:322] [bootstrap-token] Using token: hl95ly.kburtfysmrwa9hiw
	I0906 20:09:41.925599  685198 out.go:204]   - Configuring RBAC rules ...
	I0906 20:09:41.925726  685198 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 20:09:41.931061  685198 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 20:09:41.941433  685198 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 20:09:41.949574  685198 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 20:09:41.953072  685198 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 20:09:41.960886  685198 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 20:09:41.977031  685198 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 20:09:42.306904  685198 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0906 20:09:42.379610  685198 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0906 20:09:42.381108  685198 kubeadm.go:322] 
	I0906 20:09:42.381192  685198 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0906 20:09:42.381205  685198 kubeadm.go:322] 
	I0906 20:09:42.381284  685198 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0906 20:09:42.381294  685198 kubeadm.go:322] 
	I0906 20:09:42.381318  685198 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0906 20:09:42.381378  685198 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 20:09:42.381430  685198 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 20:09:42.381437  685198 kubeadm.go:322] 
	I0906 20:09:42.381486  685198 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0906 20:09:42.381561  685198 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 20:09:42.381629  685198 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 20:09:42.381637  685198 kubeadm.go:322] 
	I0906 20:09:42.381716  685198 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 20:09:42.381795  685198 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0906 20:09:42.381802  685198 kubeadm.go:322] 
	I0906 20:09:42.381880  685198 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token hl95ly.kburtfysmrwa9hiw \
	I0906 20:09:42.381983  685198 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:925f63182e76e2af8a48585abf1c88b69bde0aecb697a8f6aa9904972710d54a \
	I0906 20:09:42.382009  685198 kubeadm.go:322]     --control-plane 
	I0906 20:09:42.382019  685198 kubeadm.go:322] 
	I0906 20:09:42.382122  685198 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0906 20:09:42.382132  685198 kubeadm.go:322] 
	I0906 20:09:42.382251  685198 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token hl95ly.kburtfysmrwa9hiw \
	I0906 20:09:42.382355  685198 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:925f63182e76e2af8a48585abf1c88b69bde0aecb697a8f6aa9904972710d54a 
	I0906 20:09:42.385838  685198 kubeadm.go:322] W0906 20:09:18.719773    1226 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0906 20:09:42.386155  685198 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-aws\n", err: exit status 1
	I0906 20:09:42.386313  685198 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:09:42.386444  685198 kubeadm.go:322] W0906 20:09:28.347482    1226 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0906 20:09:42.386569  685198 kubeadm.go:322] W0906 20:09:28.350024    1226 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0906 20:09:42.386586  685198 cni.go:84] Creating CNI manager for ""
	I0906 20:09:42.386594  685198 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0906 20:09:42.388405  685198 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0906 20:09:42.390104  685198 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0906 20:09:42.395272  685198 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0906 20:09:42.395297  685198 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0906 20:09:42.419841  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
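Whether the kindnet manifest applied here actually came up can be checked from the host; a sketch assuming the kubectl context created by this profile and the app=kindnet label used by the standard kindnet manifest (the kindnet-* pod also appears later in this log's system_pods listing):
	# hypothetical check, not part of the test run
	kubectl --context ingress-addon-legacy-949230 -n kube-system get pods -l app=kindnet -o wide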
	I0906 20:09:42.858025  685198 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 20:09:42.858124  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:42.858171  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138 minikube.k8s.io/name=ingress-addon-legacy-949230 minikube.k8s.io/updated_at=2023_09_06T20_09_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:43.015051  685198 ops.go:34] apiserver oom_adj: -16
	I0906 20:09:43.015136  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:43.126643  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:43.730083  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:44.230159  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:44.730179  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:45.230631  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:45.729871  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:46.230001  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:46.730065  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:47.230133  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:47.729415  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:48.229502  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:48.730235  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:49.230193  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:49.730173  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:50.229872  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:50.730334  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:51.229549  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:51.729436  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:52.229563  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:52.730172  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:53.229356  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:53.730184  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:54.229574  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:54.729436  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:55.229407  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:55.729427  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:56.229468  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:56.729847  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:57.230261  685198 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:57.357722  685198 kubeadm.go:1081] duration metric: took 14.499675996s to wait for elevateKubeSystemPrivileges.
	I0906 20:09:57.357754  685198 kubeadm.go:406] StartCluster complete in 38.771013483s
	I0906 20:09:57.357772  685198 settings.go:142] acquiring lock: {Name:mk0ee322179d939fb926f535c1408b304c5b8b41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:57.357834  685198 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17116-652515/kubeconfig
	I0906 20:09:57.358564  685198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/kubeconfig: {Name:mkd5486ff1869e88b8977ac367495417356f4177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:57.359364  685198 kapi.go:59] client config for ingress-addon-legacy-949230: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt", KeyFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.key", CAFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x172c280), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 20:09:57.360756  685198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 20:09:57.361032  685198 config.go:182] Loaded profile config "ingress-addon-legacy-949230": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0906 20:09:57.361065  685198 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0906 20:09:57.361116  685198 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-949230"
	I0906 20:09:57.361131  685198 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-949230"
	I0906 20:09:57.361170  685198 host.go:66] Checking if "ingress-addon-legacy-949230" exists ...
	I0906 20:09:57.361622  685198 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-949230 --format={{.State.Status}}
	I0906 20:09:57.362367  685198 cert_rotation.go:137] Starting client certificate rotation controller
	I0906 20:09:57.362406  685198 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-949230"
	I0906 20:09:57.362422  685198 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-949230"
	I0906 20:09:57.362695  685198 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-949230 --format={{.State.Status}}
	I0906 20:09:57.409078  685198 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-949230" context rescaled to 1 replicas
	I0906 20:09:57.409115  685198 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:09:57.411086  685198 out.go:177] * Verifying Kubernetes components...
	I0906 20:09:57.413266  685198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:57.444493  685198 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:09:57.446388  685198 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:57.446410  685198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 20:09:57.446482  685198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-949230
	I0906 20:09:57.444059  685198 kapi.go:59] client config for ingress-addon-legacy-949230: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt", KeyFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.key", CAFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x172c280), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 20:09:57.449298  685198 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-949230"
	I0906 20:09:57.449341  685198 host.go:66] Checking if "ingress-addon-legacy-949230" exists ...
	I0906 20:09:57.449783  685198 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-949230 --format={{.State.Status}}
	I0906 20:09:57.487516  685198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33432 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/ingress-addon-legacy-949230/id_rsa Username:docker}
	I0906 20:09:57.504415  685198 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:57.504441  685198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 20:09:57.504500  685198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-949230
	I0906 20:09:57.525756  685198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33432 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/ingress-addon-legacy-949230/id_rsa Username:docker}
	I0906 20:09:57.595254  685198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 20:09:57.595996  685198 kapi.go:59] client config for ingress-addon-legacy-949230: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt", KeyFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.key", CAFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x172c280), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 20:09:57.596284  685198 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-949230" to be "Ready" ...
	I0906 20:09:57.707944  685198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:57.731560  685198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:58.103244  685198 start.go:907] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
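The sed pipeline a few lines above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the gateway address logged here. A sketch for confirming the injected block, assuming the same kubectl context (expected output shown as comments):
	kubectl --context ingress-addon-legacy-949230 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
	#        hosts {
	#           192.168.49.1 host.minikube.internal
	#           fallthrough
	#        }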
	I0906 20:09:58.276315  685198 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0906 20:09:58.278130  685198 addons.go:502] enable addons completed in 917.053271ms: enabled=[storage-provisioner default-storageclass]
	I0906 20:09:59.619718  685198 node_ready.go:58] node "ingress-addon-legacy-949230" has status "Ready":"False"
	I0906 20:10:01.621301  685198 node_ready.go:58] node "ingress-addon-legacy-949230" has status "Ready":"False"
	I0906 20:10:04.120450  685198 node_ready.go:58] node "ingress-addon-legacy-949230" has status "Ready":"False"
	I0906 20:10:06.120394  685198 node_ready.go:49] node "ingress-addon-legacy-949230" has status "Ready":"True"
	I0906 20:10:06.120425  685198 node_ready.go:38] duration metric: took 8.524118277s waiting for node "ingress-addon-legacy-949230" to be "Ready" ...
	I0906 20:10:06.120436  685198 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:10:06.128220  685198 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-wmz6m" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:08.138522  685198 pod_ready.go:102] pod "coredns-66bff467f8-wmz6m" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-09-06 20:09:57 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0906 20:10:10.142110  685198 pod_ready.go:102] pod "coredns-66bff467f8-wmz6m" in "kube-system" namespace has status "Ready":"False"
	I0906 20:10:12.640619  685198 pod_ready.go:102] pod "coredns-66bff467f8-wmz6m" in "kube-system" namespace has status "Ready":"False"
	I0906 20:10:14.141550  685198 pod_ready.go:92] pod "coredns-66bff467f8-wmz6m" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:14.141573  685198 pod_ready.go:81] duration metric: took 8.013314438s waiting for pod "coredns-66bff467f8-wmz6m" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:14.141585  685198 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-949230" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:14.146744  685198 pod_ready.go:92] pod "etcd-ingress-addon-legacy-949230" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:14.146773  685198 pod_ready.go:81] duration metric: took 5.180464ms waiting for pod "etcd-ingress-addon-legacy-949230" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:14.146792  685198 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-949230" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:14.151912  685198 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-949230" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:14.151943  685198 pod_ready.go:81] duration metric: took 5.140891ms waiting for pod "kube-apiserver-ingress-addon-legacy-949230" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:14.151956  685198 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-949230" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:14.157449  685198 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-949230" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:14.157478  685198 pod_ready.go:81] duration metric: took 5.515069ms waiting for pod "kube-controller-manager-ingress-addon-legacy-949230" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:14.157497  685198 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lvb4h" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:14.162382  685198 pod_ready.go:92] pod "kube-proxy-lvb4h" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:14.162409  685198 pod_ready.go:81] duration metric: took 4.904371ms waiting for pod "kube-proxy-lvb4h" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:14.162421  685198 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-949230" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:14.336937  685198 request.go:629] Waited for 174.386783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-949230
	I0906 20:10:14.536108  685198 request.go:629] Waited for 196.302404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-949230
	I0906 20:10:14.539106  685198 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-949230" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:14.539131  685198 pod_ready.go:81] duration metric: took 376.700781ms waiting for pod "kube-scheduler-ingress-addon-legacy-949230" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:14.539144  685198 pod_ready.go:38] duration metric: took 8.418692142s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:10:14.539160  685198 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:10:14.539223  685198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:10:14.552267  685198 api_server.go:72] duration metric: took 17.143119858s to wait for apiserver process to appear ...
	I0906 20:10:14.552293  685198 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:10:14.552309  685198 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0906 20:10:14.561479  685198 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0906 20:10:14.562736  685198 api_server.go:141] control plane version: v1.18.20
	I0906 20:10:14.562759  685198 api_server.go:131] duration metric: took 10.460227ms to wait for apiserver health ...
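The healthz probe logged above can be reproduced from the host, since the docker driver exposes the API server on the cluster IP used throughout this run; a sketch (-k skips verification of the minikube-signed certificate):
	curl -k https://192.168.49.2:8443/healthz
	# ok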
	I0906 20:10:14.562769  685198 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:10:14.736121  685198 request.go:629] Waited for 173.259695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0906 20:10:14.741927  685198 system_pods.go:59] 8 kube-system pods found
	I0906 20:10:14.741962  685198 system_pods.go:61] "coredns-66bff467f8-wmz6m" [20c698f4-2ca8-4620-9f92-7f4e4c8caf5c] Running
	I0906 20:10:14.741968  685198 system_pods.go:61] "etcd-ingress-addon-legacy-949230" [044bc525-89eb-4836-9452-68286eca02c7] Running
	I0906 20:10:14.741977  685198 system_pods.go:61] "kindnet-77vk4" [cdd57a10-587c-48ef-9e78-bc70484b9b99] Running
	I0906 20:10:14.741982  685198 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-949230" [2d3ed244-034a-4878-8157-4e2d54feaa57] Running
	I0906 20:10:14.741987  685198 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-949230" [8f759b86-0ee9-47c7-8650-192126bd0039] Running
	I0906 20:10:14.741991  685198 system_pods.go:61] "kube-proxy-lvb4h" [f137a2cf-5051-4f2a-bac8-23f39b11b69a] Running
	I0906 20:10:14.742002  685198 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-949230" [a6a223a0-bba5-48a0-9b59-c927aa37707c] Running
	I0906 20:10:14.742010  685198 system_pods.go:61] "storage-provisioner" [795dabde-6411-4b20-81a7-1d0e3a25f979] Running
	I0906 20:10:14.742016  685198 system_pods.go:74] duration metric: took 179.242579ms to wait for pod list to return data ...
	I0906 20:10:14.742026  685198 default_sa.go:34] waiting for default service account to be created ...
	I0906 20:10:14.936479  685198 request.go:629] Waited for 194.345418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0906 20:10:14.939005  685198 default_sa.go:45] found service account: "default"
	I0906 20:10:14.939035  685198 default_sa.go:55] duration metric: took 197.002571ms for default service account to be created ...
	I0906 20:10:14.939048  685198 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 20:10:15.136503  685198 request.go:629] Waited for 197.386292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0906 20:10:15.142973  685198 system_pods.go:86] 8 kube-system pods found
	I0906 20:10:15.143010  685198 system_pods.go:89] "coredns-66bff467f8-wmz6m" [20c698f4-2ca8-4620-9f92-7f4e4c8caf5c] Running
	I0906 20:10:15.143018  685198 system_pods.go:89] "etcd-ingress-addon-legacy-949230" [044bc525-89eb-4836-9452-68286eca02c7] Running
	I0906 20:10:15.143023  685198 system_pods.go:89] "kindnet-77vk4" [cdd57a10-587c-48ef-9e78-bc70484b9b99] Running
	I0906 20:10:15.143078  685198 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-949230" [2d3ed244-034a-4878-8157-4e2d54feaa57] Running
	I0906 20:10:15.143094  685198 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-949230" [8f759b86-0ee9-47c7-8650-192126bd0039] Running
	I0906 20:10:15.143100  685198 system_pods.go:89] "kube-proxy-lvb4h" [f137a2cf-5051-4f2a-bac8-23f39b11b69a] Running
	I0906 20:10:15.143106  685198 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-949230" [a6a223a0-bba5-48a0-9b59-c927aa37707c] Running
	I0906 20:10:15.143114  685198 system_pods.go:89] "storage-provisioner" [795dabde-6411-4b20-81a7-1d0e3a25f979] Running
	I0906 20:10:15.143122  685198 system_pods.go:126] duration metric: took 204.068399ms to wait for k8s-apps to be running ...
	I0906 20:10:15.143161  685198 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 20:10:15.143242  685198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:10:15.158812  685198 system_svc.go:56] duration metric: took 15.649341ms WaitForService to wait for kubelet.
	I0906 20:10:15.158900  685198 kubeadm.go:581] duration metric: took 17.749746232s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 20:10:15.158928  685198 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:10:15.336364  685198 request.go:629] Waited for 177.340478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0906 20:10:15.339293  685198 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0906 20:10:15.339329  685198 node_conditions.go:123] node cpu capacity is 2
	I0906 20:10:15.339341  685198 node_conditions.go:105] duration metric: took 180.40828ms to run NodePressure ...
	I0906 20:10:15.339374  685198 start.go:228] waiting for startup goroutines ...
	I0906 20:10:15.339394  685198 start.go:233] waiting for cluster config update ...
	I0906 20:10:15.339404  685198 start.go:242] writing updated cluster config ...
	I0906 20:10:15.339731  685198 ssh_runner.go:195] Run: rm -f paused
	I0906 20:10:15.402136  685198 start.go:600] kubectl: 1.28.1, cluster: 1.18.20 (minor skew: 10)
	I0906 20:10:15.404378  685198 out.go:177] 
	W0906 20:10:15.406603  685198 out.go:239] ! /usr/local/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.18.20.
	I0906 20:10:15.408427  685198 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0906 20:10:15.410369  685198 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-949230" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Sep 06 20:13:21 ingress-addon-legacy-949230 crio[893]: time="2023-09-06 20:13:21.719825592Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=472ca737-ebe3-4a55-9738-d784ba21a3be name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 06 20:13:21 ingress-addon-legacy-949230 crio[893]: time="2023-09-06 20:13:21.720024311Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a39a074194753e46f21cfbf0b4253444939f276ed100d23d5fc568ada19a9ebb,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb],Size_:28999826,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=472ca737-ebe3-4a55-9738-d784ba21a3be name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 06 20:13:21 ingress-addon-legacy-949230 crio[893]: time="2023-09-06 20:13:21.720885020Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-jgrlq/hello-world-app" id=b3ca51a5-91a3-43d8-b6d6-10c148a377c4 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Sep 06 20:13:21 ingress-addon-legacy-949230 crio[893]: time="2023-09-06 20:13:21.720980503Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 06 20:13:21 ingress-addon-legacy-949230 crio[893]: time="2023-09-06 20:13:21.821926922Z" level=info msg="Created container 80836a9d45bf3973db243fa14a8bcd482c57274ba058c81968454d4ae9b24540: default/hello-world-app-5f5d8b66bb-jgrlq/hello-world-app" id=b3ca51a5-91a3-43d8-b6d6-10c148a377c4 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Sep 06 20:13:21 ingress-addon-legacy-949230 crio[893]: time="2023-09-06 20:13:21.823091466Z" level=info msg="Starting container: 80836a9d45bf3973db243fa14a8bcd482c57274ba058c81968454d4ae9b24540" id=ce7c1cad-eaed-4574-ab41-fa9706b4ac4e name=/runtime.v1alpha2.RuntimeService/StartContainer
	Sep 06 20:13:21 ingress-addon-legacy-949230 conmon[3720]: conmon 80836a9d45bf3973db24 <ninfo>: container 3731 exited with status 1
	Sep 06 20:13:21 ingress-addon-legacy-949230 crio[893]: time="2023-09-06 20:13:21.840074440Z" level=info msg="Started container" PID=3731 containerID=80836a9d45bf3973db243fa14a8bcd482c57274ba058c81968454d4ae9b24540 description=default/hello-world-app-5f5d8b66bb-jgrlq/hello-world-app id=ce7c1cad-eaed-4574-ab41-fa9706b4ac4e name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=03b89a8ea4da1397518b9b4a8c5d348ae02531729ab0d4addb525f910e8c2a9f
	Sep 06 20:13:22 ingress-addon-legacy-949230 crio[893]: time="2023-09-06 20:13:22.225322815Z" level=info msg="Removing container: 9f8de56314ddd76473b6d0c61970acdda187c590231115afefb2f4be025970f2" id=5c28746e-3e1b-4885-ba84-a3984a0a1322 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Sep 06 20:13:22 ingress-addon-legacy-949230 crio[893]: time="2023-09-06 20:13:22.253050339Z" level=info msg="Removed container 9f8de56314ddd76473b6d0c61970acdda187c590231115afefb2f4be025970f2: default/hello-world-app-5f5d8b66bb-jgrlq/hello-world-app" id=5c28746e-3e1b-4885-ba84-a3984a0a1322 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Sep 06 20:13:23 ingress-addon-legacy-949230 crio[893]: time="2023-09-06 20:13:23.179789968Z" level=warning msg="Stopping container bfa16ca06d0f240def27230775a263c86e9ff9fbe21b575f1f848fffe39cb13d with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=38a3de20-06ae-47c5-bb4b-eec85b34880c name=/runtime.v1alpha2.RuntimeService/StopContainer
	Sep 06 20:13:23 ingress-addon-legacy-949230 conmon[2740]: conmon bfa16ca06d0f240def27 <ninfo>: container 2751 exited with status 137
	Sep 06 20:13:23 ingress-addon-legacy-949230 crio[893]: time="2023-09-06 20:13:23.370580973Z" level=info msg="Stopped container bfa16ca06d0f240def27230775a263c86e9ff9fbe21b575f1f848fffe39cb13d: ingress-nginx/ingress-nginx-controller-7fcf777cb7-pctf7/controller" id=38a3de20-06ae-47c5-bb4b-eec85b34880c name=/runtime.v1alpha2.RuntimeService/StopContainer
	Sep 06 20:13:23 ingress-addon-legacy-949230 crio[893]: time="2023-09-06 20:13:23.370890084Z" level=info msg="Stopped container bfa16ca06d0f240def27230775a263c86e9ff9fbe21b575f1f848fffe39cb13d: ingress-nginx/ingress-nginx-controller-7fcf777cb7-pctf7/controller" id=207c82bd-a43a-43fd-8188-b2422f4ee3d1 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Sep 06 20:13:23 ingress-addon-legacy-949230 crio[893]: time="2023-09-06 20:13:23.371665796Z" level=info msg="Stopping pod sandbox: a761df37111b803035e509cce8a603a7e0ca85ab019ca31da1d7727cfaeaaed5" id=8be8ebd9-2708-4b5b-b0bf-8804d161679e name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 06 20:13:23 ingress-addon-legacy-949230 crio[893]: time="2023-09-06 20:13:23.371946632Z" level=info msg="Stopping pod sandbox: a761df37111b803035e509cce8a603a7e0ca85ab019ca31da1d7727cfaeaaed5" id=80401288-54eb-472c-9a3e-214671aa1fe6 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 06 20:13:23 ingress-addon-legacy-949230 crio[893]: time="2023-09-06 20:13:23.376226396Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-UR3YG4AQXQJB5YJC - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-Y5CDQXE74WZX73FC - [0:0]\n-X KUBE-HP-UR3YG4AQXQJB5YJC\n-X KUBE-HP-Y5CDQXE74WZX73FC\nCOMMIT\n"
	Sep 06 20:13:23 ingress-addon-legacy-949230 crio[893]: time="2023-09-06 20:13:23.378838831Z" level=info msg="Closing host port tcp:80"
	Sep 06 20:13:23 ingress-addon-legacy-949230 crio[893]: time="2023-09-06 20:13:23.378888497Z" level=info msg="Closing host port tcp:443"
	Sep 06 20:13:23 ingress-addon-legacy-949230 crio[893]: time="2023-09-06 20:13:23.380181796Z" level=info msg="Host port tcp:80 does not have an open socket"
	Sep 06 20:13:23 ingress-addon-legacy-949230 crio[893]: time="2023-09-06 20:13:23.380213607Z" level=info msg="Host port tcp:443 does not have an open socket"
	Sep 06 20:13:23 ingress-addon-legacy-949230 crio[893]: time="2023-09-06 20:13:23.380366050Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-pctf7 Namespace:ingress-nginx ID:a761df37111b803035e509cce8a603a7e0ca85ab019ca31da1d7727cfaeaaed5 UID:f66e332c-fc57-4b80-80d1-9165f1d4483e NetNS:/var/run/netns/8451aa94-86bf-4ea0-af95-33b7fbe768e3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 06 20:13:23 ingress-addon-legacy-949230 crio[893]: time="2023-09-06 20:13:23.380517894Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-pctf7 from CNI network \"kindnet\" (type=ptp)"
	Sep 06 20:13:23 ingress-addon-legacy-949230 crio[893]: time="2023-09-06 20:13:23.407669747Z" level=info msg="Stopped pod sandbox: a761df37111b803035e509cce8a603a7e0ca85ab019ca31da1d7727cfaeaaed5" id=8be8ebd9-2708-4b5b-b0bf-8804d161679e name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 06 20:13:23 ingress-addon-legacy-949230 crio[893]: time="2023-09-06 20:13:23.407788483Z" level=info msg="Stopped pod sandbox (already stopped): a761df37111b803035e509cce8a603a7e0ca85ab019ca31da1d7727cfaeaaed5" id=80401288-54eb-472c-9a3e-214671aa1fe6 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	80836a9d45bf3       a39a074194753e46f21cfbf0b4253444939f276ed100d23d5fc568ada19a9ebb                                                   7 seconds ago       Exited              hello-world-app           2                   03b89a8ea4da1       hello-world-app-5f5d8b66bb-jgrlq
	65673441be0e6       docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                    2 minutes ago       Running             nginx                     0                   b621c70464ee0       nginx
	bfa16ca06d0f2       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   a761df37111b8       ingress-nginx-controller-7fcf777cb7-pctf7
	91d44137c4e02       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              patch                     0                   aef081ca09621       ingress-nginx-admission-patch-8hgq8
	8158d512bf9d5       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   aa7d6aac6bf8a       ingress-nginx-admission-create-qf9fd
	10726a751c20a       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   44a3e1619bf1d       coredns-66bff467f8-wmz6m
	1261ec9e425ed       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   9f40ed68deda4       storage-provisioner
	50dafa6cf010d       docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f                 3 minutes ago       Running             kindnet-cni               0                   e81d944081297       kindnet-77vk4
	5e33f7283e320       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   3f55323999130       kube-proxy-lvb4h
	101b68c7c6532       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   3 minutes ago       Running             etcd                      0                   89ca60bb210f5       etcd-ingress-addon-legacy-949230
	92d1d82938558       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   3 minutes ago       Running             kube-apiserver            0                   afef96a2ccb3d       kube-apiserver-ingress-addon-legacy-949230
	0898947426b59       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   3 minutes ago       Running             kube-controller-manager   0                   a4cd3cc987ed8       kube-controller-manager-ingress-addon-legacy-949230
	19be464236487       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   3 minutes ago       Running             kube-scheduler            0                   11e8e2665735f       kube-scheduler-ingress-addon-legacy-949230
	
	* 
	* ==> coredns [10726a751c20a301a06360cef39db07b2979ea0c7ea3cd9f54c277bc5e878088] <==
	* [INFO] 10.244.0.5:57234 - 13683 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000044661s
	[INFO] 10.244.0.5:41747 - 9166 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002097532s
	[INFO] 10.244.0.5:57234 - 14159 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002005454s
	[INFO] 10.244.0.5:41747 - 42424 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001892051s
	[INFO] 10.244.0.5:57234 - 42769 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00190778s
	[INFO] 10.244.0.5:57234 - 48926 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000135959s
	[INFO] 10.244.0.5:41747 - 19738 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000232213s
	[INFO] 10.244.0.5:39808 - 49180 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000078638s
	[INFO] 10.244.0.5:35347 - 41258 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000066691s
	[INFO] 10.244.0.5:39808 - 29798 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000060102s
	[INFO] 10.244.0.5:39808 - 44156 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000054023s
	[INFO] 10.244.0.5:35347 - 53606 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000112943s
	[INFO] 10.244.0.5:39808 - 3127 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045383s
	[INFO] 10.244.0.5:35347 - 17554 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000030277s
	[INFO] 10.244.0.5:35347 - 10637 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000053801s
	[INFO] 10.244.0.5:39808 - 647 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00002665s
	[INFO] 10.244.0.5:39808 - 24693 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040566s
	[INFO] 10.244.0.5:35347 - 56342 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000025141s
	[INFO] 10.244.0.5:35347 - 8717 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000047819s
	[INFO] 10.244.0.5:39808 - 22083 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001299953s
	[INFO] 10.244.0.5:35347 - 47926 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000995453s
	[INFO] 10.244.0.5:35347 - 65233 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001051544s
	[INFO] 10.244.0.5:39808 - 39742 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000968663s
	[INFO] 10.244.0.5:35347 - 59592 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000049132s
	[INFO] 10.244.0.5:39808 - 13876 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000030769s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-949230
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-949230
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138
	                    minikube.k8s.io/name=ingress-addon-legacy-949230
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_06T20_09_42_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Sep 2023 20:09:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-949230
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Sep 2023 20:13:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Sep 2023 20:13:15 +0000   Wed, 06 Sep 2023 20:09:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Sep 2023 20:13:15 +0000   Wed, 06 Sep 2023 20:09:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Sep 2023 20:13:15 +0000   Wed, 06 Sep 2023 20:09:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Sep 2023 20:13:15 +0000   Wed, 06 Sep 2023 20:10:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-949230
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 09b73c0d80794288a2d9d4107a582080
	  System UUID:                8234ac82-77ec-48aa-bdf5-a8a25d2b0614
	  Boot ID:                    d5624a78-31f3-41c0-a03f-adfa6e3f71eb
	  Kernel Version:             5.15.0-1044-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-jgrlq                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 coredns-66bff467f8-wmz6m                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m32s
	  kube-system                 etcd-ingress-addon-legacy-949230                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kindnet-77vk4                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m32s
	  kube-system                 kube-apiserver-ingress-addon-legacy-949230             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-949230    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-proxy-lvb4h                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 kube-scheduler-ingress-addon-legacy-949230             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m58s (x4 over 3m58s)  kubelet     Node ingress-addon-legacy-949230 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m58s (x5 over 3m58s)  kubelet     Node ingress-addon-legacy-949230 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m58s (x4 over 3m58s)  kubelet     Node ingress-addon-legacy-949230 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m44s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m44s                  kubelet     Node ingress-addon-legacy-949230 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m44s                  kubelet     Node ingress-addon-legacy-949230 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m44s                  kubelet     Node ingress-addon-legacy-949230 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m31s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m24s                  kubelet     Node ingress-addon-legacy-949230 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001083] FS-Cache: O-key=[8] '96d3c90000000000'
	[  +0.000766] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000988] FS-Cache: N-cookie d=00000000a39b565b{9p.inode} n=000000002b2f1a65
	[  +0.001160] FS-Cache: N-key=[8] '96d3c90000000000'
	[  +0.002380] FS-Cache: Duplicate cookie detected
	[  +0.000722] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.000991] FS-Cache: O-cookie d=00000000a39b565b{9p.inode} n=00000000f3c7fb8d
	[  +0.001073] FS-Cache: O-key=[8] '96d3c90000000000'
	[  +0.000829] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000982] FS-Cache: N-cookie d=00000000a39b565b{9p.inode} n=0000000050869d71
	[  +0.001077] FS-Cache: N-key=[8] '96d3c90000000000'
	[  +2.999130] FS-Cache: Duplicate cookie detected
	[  +0.000756] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.000998] FS-Cache: O-cookie d=00000000a39b565b{9p.inode} n=00000000da17136c
	[  +0.001217] FS-Cache: O-key=[8] '95d3c90000000000'
	[  +0.000727] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000970] FS-Cache: N-cookie d=00000000a39b565b{9p.inode} n=000000002b2f1a65
	[  +0.001133] FS-Cache: N-key=[8] '95d3c90000000000'
	[  +0.318024] FS-Cache: Duplicate cookie detected
	[  +0.000783] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.000990] FS-Cache: O-cookie d=00000000a39b565b{9p.inode} n=000000003cc11187
	[  +0.001164] FS-Cache: O-key=[8] '9bd3c90000000000'
	[  +0.000748] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000986] FS-Cache: N-cookie d=00000000a39b565b{9p.inode} n=00000000302c6dfe
	[  +0.001111] FS-Cache: N-key=[8] '9bd3c90000000000'
	
	* 
	* ==> etcd [101b68c7c653294882c9dabf37f29916443a852dc684dc1984e839fbb31cd4d1] <==
	* raft2023/09/06 20:09:34 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-09-06 20:09:34.724949 W | auth: simple token is not cryptographically signed
	2023-09-06 20:09:34.762693 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-09-06 20:09:34.764084 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-09-06 20:09:34.766611 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-06 20:09:34.766828 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-06 20:09:34.766986 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/09/06 20:09:34 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-09-06 20:09:34.767344 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/09/06 20:09:35 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/09/06 20:09:35 INFO: aec36adc501070cc became candidate at term 2
	raft2023/09/06 20:09:35 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/09/06 20:09:35 INFO: aec36adc501070cc became leader at term 2
	raft2023/09/06 20:09:35 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-09-06 20:09:35.352020 I | etcdserver: setting up the initial cluster version to 3.4
	2023-09-06 20:09:35.353085 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-09-06 20:09:35.353200 I | etcdserver/api: enabled capabilities for version 3.4
	2023-09-06 20:09:35.353266 I | etcdserver: published {Name:ingress-addon-legacy-949230 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-09-06 20:09:35.353407 I | embed: ready to serve client requests
	2023-09-06 20:09:35.354860 I | embed: serving client requests on 192.168.49.2:2379
	2023-09-06 20:09:35.354982 I | embed: ready to serve client requests
	2023-09-06 20:09:35.356191 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-06 20:09:57.877380 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-wmz6m\" " with result "range_response_count:1 size:3349" took too long (128.978502ms) to execute
	2023-09-06 20:09:58.020416 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-wmz6m\" " with result "range_response_count:1 size:3753" took too long (125.124108ms) to execute
	2023-09-06 20:09:58.021970 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kindnet-77vk4\" " with result "range_response_count:1 size:3821" took too long (114.244271ms) to execute
	
	* 
	* ==> kernel <==
	*  20:13:29 up  2:52,  0 users,  load average: 0.24, 0.97, 1.41
	Linux ingress-addon-legacy-949230 5.15.0-1044-aws #49~20.04.1-Ubuntu SMP Mon Aug 21 17:10:24 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [50dafa6cf010dd236e03fbc572facd90c14dbf9e7586552239f7cd2b550da3ec] <==
	* I0906 20:11:21.093586       1 main.go:227] handling current node
	I0906 20:11:31.097043       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0906 20:11:31.097072       1 main.go:227] handling current node
	I0906 20:11:41.102974       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0906 20:11:41.103003       1 main.go:227] handling current node
	I0906 20:11:51.111478       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0906 20:11:51.111513       1 main.go:227] handling current node
	I0906 20:12:01.115740       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0906 20:12:01.115770       1 main.go:227] handling current node
	I0906 20:12:11.125197       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0906 20:12:11.125226       1 main.go:227] handling current node
	I0906 20:12:21.134559       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0906 20:12:21.134741       1 main.go:227] handling current node
	I0906 20:12:31.145312       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0906 20:12:31.145340       1 main.go:227] handling current node
	I0906 20:12:41.149142       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0906 20:12:41.149172       1 main.go:227] handling current node
	I0906 20:12:51.154711       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0906 20:12:51.154745       1 main.go:227] handling current node
	I0906 20:13:01.158177       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0906 20:13:01.158207       1 main.go:227] handling current node
	I0906 20:13:11.162194       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0906 20:13:11.162223       1 main.go:227] handling current node
	I0906 20:13:21.176966       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0906 20:13:21.177195       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [92d1d82938558e5056a14acc3b9ce126758b3a431ad735406a59cf41b603797c] <==
	* I0906 20:09:39.359929       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0906 20:09:39.359948       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	I0906 20:09:39.479010       1 cache.go:39] Caches are synced for autoregister controller
	I0906 20:09:39.479467       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0906 20:09:39.532873       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 20:09:39.534131       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0906 20:09:39.534288       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0906 20:09:40.230034       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0906 20:09:40.230084       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0906 20:09:40.235821       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0906 20:09:40.239889       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0906 20:09:40.239917       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0906 20:09:40.672406       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 20:09:40.714360       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0906 20:09:40.874289       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0906 20:09:40.876000       1 controller.go:609] quota admission added evaluator for: endpoints
	I0906 20:09:40.887335       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0906 20:09:41.744029       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0906 20:09:42.252367       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0906 20:09:42.340316       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0906 20:09:45.652263       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 20:09:57.600523       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0906 20:09:57.616284       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0906 20:10:16.269441       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0906 20:10:42.490665       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [0898947426b5960494b3cda538c311f26b0a557c2050c55a973bcf82cac31a60] <==
	* I0906 20:09:57.597137       1 shared_informer.go:230] Caches are synced for ReplicaSet 
	I0906 20:09:57.602487       1 shared_informer.go:230] Caches are synced for deployment 
	I0906 20:09:57.624744       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"dd00a6e0-4656-4f52-ad5b-505346186050", APIVersion:"apps/v1", ResourceVersion:"208", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-lvb4h
	I0906 20:09:57.648750       1 shared_informer.go:230] Caches are synced for job 
	I0906 20:09:57.659257       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"31b61633-32f8-4347-b54c-aff732576683", APIVersion:"apps/v1", ResourceVersion:"231", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-77vk4
	I0906 20:09:57.659653       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"14d9409d-d56b-4fe4-8103-21bdf1377b72", APIVersion:"apps/v1", ResourceVersion:"328", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I0906 20:09:57.664940       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"8685e821-22a1-44c2-8860-41279fe62895", APIVersion:"apps/v1", ResourceVersion:"339", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-wmz6m
	I0906 20:09:57.693214       1 shared_informer.go:230] Caches are synced for resource quota 
	I0906 20:09:57.701256       1 shared_informer.go:230] Caches are synced for resource quota 
	I0906 20:09:57.721702       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I0906 20:09:57.735230       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0906 20:09:57.735325       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0906 20:09:57.797514       1 shared_informer.go:230] Caches are synced for garbage collector 
	E0906 20:09:58.056998       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"31b61633-32f8-4347-b54c-aff732576683", ResourceVersion:"231", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63829627782, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230511-dc714da8\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400193fa80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400193faa0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x400193fac0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*
int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400193fae0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI
:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400193fb00), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVol
umeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400193fb20), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDis
k:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), Sca
leIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230511-dc714da8", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400193fb40)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400193fb80)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.Re
sourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log"
, TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001220780), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000ddb3f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40003899d0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.P
odDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40000bbc70)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000ddb450)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0906 20:10:07.211018       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0906 20:10:16.253452       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"14d2a773-dd4d-4eec-8c63-97545200d8c5", APIVersion:"apps/v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0906 20:10:16.282301       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"6750307a-1316-4bf7-8dbd-1aab9478ee51", APIVersion:"apps/v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-pctf7
	I0906 20:10:16.291450       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"6e99eebd-8721-477b-8ecf-57d845a6f6a6", APIVersion:"batch/v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-qf9fd
	I0906 20:10:16.342444       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"9abf8508-43e9-453f-b4bf-5616ec741066", APIVersion:"batch/v1", ResourceVersion:"476", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-8hgq8
	I0906 20:10:18.815497       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"6e99eebd-8721-477b-8ecf-57d845a6f6a6", APIVersion:"batch/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0906 20:10:19.807161       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"9abf8508-43e9-453f-b4bf-5616ec741066", APIVersion:"batch/v1", ResourceVersion:"485", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0906 20:13:03.572490       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"da6cad37-8824-4798-8c51-d8abfc3b1d6c", APIVersion:"apps/v1", ResourceVersion:"701", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0906 20:13:03.593555       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"639321d9-c496-4296-9458-a3f83f5c4578", APIVersion:"apps/v1", ResourceVersion:"702", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-jgrlq
	E0906 20:13:25.906514       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-mt47p" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [5e33f7283e3200040308b1abffd6d69e06c87ec911dcfe773f96bb7bb65e57fc] <==
	* W0906 20:09:58.344622       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0906 20:09:58.356385       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0906 20:09:58.356522       1 server_others.go:186] Using iptables Proxier.
	I0906 20:09:58.356900       1 server.go:583] Version: v1.18.20
	I0906 20:09:58.358026       1 config.go:315] Starting service config controller
	I0906 20:09:58.358169       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0906 20:09:58.358301       1 config.go:133] Starting endpoints config controller
	I0906 20:09:58.358333       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0906 20:09:58.458431       1 shared_informer.go:230] Caches are synced for service config 
	I0906 20:09:58.458479       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [19be464236487b3f4dfb430c19c4e93cb81a99f62c4ae28a636c60d433f18288] <==
	* I0906 20:09:39.454586       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0906 20:09:39.454683       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0906 20:09:39.456586       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0906 20:09:39.456791       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 20:09:39.456827       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 20:09:39.456881       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0906 20:09:39.465810       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 20:09:39.466011       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0906 20:09:39.466202       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 20:09:39.466418       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 20:09:39.466459       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 20:09:39.466632       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 20:09:39.466736       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 20:09:39.466852       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0906 20:09:39.466918       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 20:09:39.466990       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 20:09:39.467142       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 20:09:39.470018       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 20:09:40.316871       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 20:09:40.319532       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 20:09:40.387856       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 20:09:40.444493       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0906 20:09:42.156997       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0906 20:09:58.175055       1 factory.go:503] pod kube-system/coredns-66bff467f8-wmz6m is already present in the backoff queue
	E0906 20:09:58.294471       1 factory.go:503] pod: kube-system/storage-provisioner is already present in unschedulable queue
	
	* 
	* ==> kubelet <==
	* Sep 06 20:13:08 ingress-addon-legacy-949230 kubelet[1649]: I0906 20:13:08.200286    1649 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: bb627375b17090a01d09fb7b54f200302dca477e2304292e0cd742cf7869f06f
	Sep 06 20:13:08 ingress-addon-legacy-949230 kubelet[1649]: I0906 20:13:08.200975    1649 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9f8de56314ddd76473b6d0c61970acdda187c590231115afefb2f4be025970f2
	Sep 06 20:13:08 ingress-addon-legacy-949230 kubelet[1649]: E0906 20:13:08.201233    1649 pod_workers.go:191] Error syncing pod 233320c2-a42f-4f88-a45e-8a73ccd054f0 ("hello-world-app-5f5d8b66bb-jgrlq_default(233320c2-a42f-4f88-a45e-8a73ccd054f0)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-jgrlq_default(233320c2-a42f-4f88-a45e-8a73ccd054f0)"
	Sep 06 20:13:09 ingress-addon-legacy-949230 kubelet[1649]: I0906 20:13:09.203111    1649 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9f8de56314ddd76473b6d0c61970acdda187c590231115afefb2f4be025970f2
	Sep 06 20:13:09 ingress-addon-legacy-949230 kubelet[1649]: E0906 20:13:09.203385    1649 pod_workers.go:191] Error syncing pod 233320c2-a42f-4f88-a45e-8a73ccd054f0 ("hello-world-app-5f5d8b66bb-jgrlq_default(233320c2-a42f-4f88-a45e-8a73ccd054f0)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-jgrlq_default(233320c2-a42f-4f88-a45e-8a73ccd054f0)"
	Sep 06 20:13:17 ingress-addon-legacy-949230 kubelet[1649]: E0906 20:13:17.718574    1649 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 06 20:13:17 ingress-addon-legacy-949230 kubelet[1649]: E0906 20:13:17.718619    1649 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 06 20:13:17 ingress-addon-legacy-949230 kubelet[1649]: E0906 20:13:17.718663    1649 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 06 20:13:17 ingress-addon-legacy-949230 kubelet[1649]: E0906 20:13:17.718695    1649 pod_workers.go:191] Error syncing pod 0d342821-0b1f-4534-9ff7-7385970afbb2 ("kube-ingress-dns-minikube_kube-system(0d342821-0b1f-4534-9ff7-7385970afbb2)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Sep 06 20:13:19 ingress-addon-legacy-949230 kubelet[1649]: I0906 20:13:19.582992    1649 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-47ks5" (UniqueName: "kubernetes.io/secret/0d342821-0b1f-4534-9ff7-7385970afbb2-minikube-ingress-dns-token-47ks5") pod "0d342821-0b1f-4534-9ff7-7385970afbb2" (UID: "0d342821-0b1f-4534-9ff7-7385970afbb2")
	Sep 06 20:13:19 ingress-addon-legacy-949230 kubelet[1649]: I0906 20:13:19.587890    1649 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d342821-0b1f-4534-9ff7-7385970afbb2-minikube-ingress-dns-token-47ks5" (OuterVolumeSpecName: "minikube-ingress-dns-token-47ks5") pod "0d342821-0b1f-4534-9ff7-7385970afbb2" (UID: "0d342821-0b1f-4534-9ff7-7385970afbb2"). InnerVolumeSpecName "minikube-ingress-dns-token-47ks5". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 06 20:13:19 ingress-addon-legacy-949230 kubelet[1649]: I0906 20:13:19.683337    1649 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-47ks5" (UniqueName: "kubernetes.io/secret/0d342821-0b1f-4534-9ff7-7385970afbb2-minikube-ingress-dns-token-47ks5") on node "ingress-addon-legacy-949230" DevicePath ""
	Sep 06 20:13:21 ingress-addon-legacy-949230 kubelet[1649]: E0906 20:13:21.163319    1649 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-pctf7.17826866ef6cb657", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-pctf7", UID:"f66e332c-fc57-4b80-80d1-9165f1d4483e", APIVersion:"v1", ResourceVersion:"469", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-949230"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc136553849642c57, ext:218965501667, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc136553849642c57, ext:218965501667, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-pctf7.17826866ef6cb657" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 06 20:13:21 ingress-addon-legacy-949230 kubelet[1649]: E0906 20:13:21.174837    1649 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-pctf7.17826866ef6cb657", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-pctf7", UID:"f66e332c-fc57-4b80-80d1-9165f1d4483e", APIVersion:"v1", ResourceVersion:"469", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-949230"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc136553849642c57, ext:218965501667, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13655384a122011, ext:218976901798, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-pctf7.17826866ef6cb657" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 06 20:13:21 ingress-addon-legacy-949230 kubelet[1649]: I0906 20:13:21.717681    1649 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9f8de56314ddd76473b6d0c61970acdda187c590231115afefb2f4be025970f2
	Sep 06 20:13:22 ingress-addon-legacy-949230 kubelet[1649]: I0906 20:13:22.223365    1649 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9f8de56314ddd76473b6d0c61970acdda187c590231115afefb2f4be025970f2
	Sep 06 20:13:22 ingress-addon-legacy-949230 kubelet[1649]: I0906 20:13:22.223619    1649 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 80836a9d45bf3973db243fa14a8bcd482c57274ba058c81968454d4ae9b24540
	Sep 06 20:13:22 ingress-addon-legacy-949230 kubelet[1649]: E0906 20:13:22.223861    1649 pod_workers.go:191] Error syncing pod 233320c2-a42f-4f88-a45e-8a73ccd054f0 ("hello-world-app-5f5d8b66bb-jgrlq_default(233320c2-a42f-4f88-a45e-8a73ccd054f0)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-jgrlq_default(233320c2-a42f-4f88-a45e-8a73ccd054f0)"
	Sep 06 20:13:23 ingress-addon-legacy-949230 kubelet[1649]: I0906 20:13:23.593429    1649 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-jdnhn" (UniqueName: "kubernetes.io/secret/f66e332c-fc57-4b80-80d1-9165f1d4483e-ingress-nginx-token-jdnhn") pod "f66e332c-fc57-4b80-80d1-9165f1d4483e" (UID: "f66e332c-fc57-4b80-80d1-9165f1d4483e")
	Sep 06 20:13:23 ingress-addon-legacy-949230 kubelet[1649]: I0906 20:13:23.593494    1649 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/f66e332c-fc57-4b80-80d1-9165f1d4483e-webhook-cert") pod "f66e332c-fc57-4b80-80d1-9165f1d4483e" (UID: "f66e332c-fc57-4b80-80d1-9165f1d4483e")
	Sep 06 20:13:23 ingress-addon-legacy-949230 kubelet[1649]: I0906 20:13:23.598308    1649 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f66e332c-fc57-4b80-80d1-9165f1d4483e-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f66e332c-fc57-4b80-80d1-9165f1d4483e" (UID: "f66e332c-fc57-4b80-80d1-9165f1d4483e"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 06 20:13:23 ingress-addon-legacy-949230 kubelet[1649]: I0906 20:13:23.600100    1649 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f66e332c-fc57-4b80-80d1-9165f1d4483e-ingress-nginx-token-jdnhn" (OuterVolumeSpecName: "ingress-nginx-token-jdnhn") pod "f66e332c-fc57-4b80-80d1-9165f1d4483e" (UID: "f66e332c-fc57-4b80-80d1-9165f1d4483e"). InnerVolumeSpecName "ingress-nginx-token-jdnhn". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 06 20:13:23 ingress-addon-legacy-949230 kubelet[1649]: I0906 20:13:23.693838    1649 reconciler.go:319] Volume detached for volume "ingress-nginx-token-jdnhn" (UniqueName: "kubernetes.io/secret/f66e332c-fc57-4b80-80d1-9165f1d4483e-ingress-nginx-token-jdnhn") on node "ingress-addon-legacy-949230" DevicePath ""
	Sep 06 20:13:23 ingress-addon-legacy-949230 kubelet[1649]: I0906 20:13:23.693888    1649 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/f66e332c-fc57-4b80-80d1-9165f1d4483e-webhook-cert") on node "ingress-addon-legacy-949230" DevicePath ""
	Sep 06 20:13:24 ingress-addon-legacy-949230 kubelet[1649]: W0906 20:13:24.229083    1649 pod_container_deletor.go:77] Container "a761df37111b803035e509cce8a603a7e0ca85ab019ca31da1d7727cfaeaaed5" not found in pod's containers
	
	* 
	* ==> storage-provisioner [1261ec9e425ed3f58820e2490a76cfdb6f0215323cf43dc3ff01997cc12ae46e] <==
	* I0906 20:10:10.631008       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 20:10:10.662022       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 20:10:10.662152       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 20:10:10.669689       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 20:10:10.670704       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"edbee7e6-eba3-4b41-88ad-58455e6d6189", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-949230_70aabd2e-d61f-4712-9002-5dc0d83316e1 became leader
	I0906 20:10:10.670839       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-949230_70aabd2e-d61f-4712-9002-5dc0d83316e1!
	I0906 20:10:10.771763       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-949230_70aabd2e-d61f-4712-9002-5dc0d83316e1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-949230 -n ingress-addon-legacy-949230
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-949230 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (182.13s)
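
The ImageInspectError entries in the kubelet log above come from CRI-O's short-name handling: "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4..." carries no registry host, and the node's /etc/containers/registries.conf defines no unqualified-search registries, so the image lookup fails before any registry is contacted. A rough sketch of one way to make such short names resolvable on this node, assuming docker.io is the intended registry (the drop-in file name is illustrative, not taken from this run):

    # declare a search registry for unqualified image names inside the minikube node, then restart CRI-O
    out/minikube-linux-arm64 -p ingress-addon-legacy-949230 ssh "echo 'unqualified-search-registries = [\"docker.io\"]' | sudo tee /etc/containers/registries.conf.d/99-unqualified.conf && sudo systemctl restart crio"

Alternatively, referencing the image fully qualified (docker.io/cryptexlabs/minikube-ingress-dns:0.3.0@sha256:...) sidesteps short-name resolution entirely.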

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (4.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-782472 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-782472 -- exec busybox-5bc68d56bd-pwl5s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-782472 -- exec busybox-5bc68d56bd-pwl5s -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-782472 -- exec busybox-5bc68d56bd-pwl5s -- sh -c "ping -c 1 192.168.58.1": exit status 1 (256.198448ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-pwl5s): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-782472 -- exec busybox-5bc68d56bd-thpl6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-782472 -- exec busybox-5bc68d56bd-thpl6 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-782472 -- exec busybox-5bc68d56bd-thpl6 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (231.614378ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-thpl6): exit status 1
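
The "ping: permission denied (are you root?)" errors above are the usual symptom of a container running without the NET_RAW capability: busybox ping needs a raw ICMP socket, and CRI-O, unlike Docker, does not include NET_RAW in its default capability set. A rough sketch of a way to check this against the same cluster, using a throwaway pod that adds the capability explicitly (the pod name and busybox tag are illustrative; only the kubectl context and gateway IP are taken from this run):

    # launch a one-shot busybox pod with CAP_NET_RAW and ping the host gateway once
    out/minikube-linux-arm64 kubectl -p multinode-782472 -- run ping-capcheck --image=busybox:1.28 --restart=Never \
      --overrides='{"apiVersion":"v1","spec":{"containers":[{"name":"ping-capcheck","image":"busybox:1.28","command":["ping","-c","1","192.168.58.1"],"securityContext":{"capabilities":{"add":["NET_RAW"]}}}]}}'
    # a successful reply here, where the test's unprivileged pods failed, points at the missing capability
    out/minikube-linux-arm64 kubectl -p multinode-782472 -- logs ping-capcheck

If that holds, granting NET_RAW in the pod spec (securityContext.capabilities.add: ["NET_RAW"]) or in CRI-O's default_capabilities would let the busybox pods ping the host.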
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-782472
helpers_test.go:235: (dbg) docker inspect multinode-782472:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4f96b0b3ad5d839fd8a7a05da769dc0c581f9a93b92cde73511040b7cf72780a",
	        "Created": "2023-09-06T20:19:58.804848314Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 722119,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-06T20:19:59.137570893Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c0704b3a4f8b9b9ec71e677be36506d49ffd7d56513ca0bdb5d12d8921195405",
	        "ResolvConfPath": "/var/lib/docker/containers/4f96b0b3ad5d839fd8a7a05da769dc0c581f9a93b92cde73511040b7cf72780a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4f96b0b3ad5d839fd8a7a05da769dc0c581f9a93b92cde73511040b7cf72780a/hostname",
	        "HostsPath": "/var/lib/docker/containers/4f96b0b3ad5d839fd8a7a05da769dc0c581f9a93b92cde73511040b7cf72780a/hosts",
	        "LogPath": "/var/lib/docker/containers/4f96b0b3ad5d839fd8a7a05da769dc0c581f9a93b92cde73511040b7cf72780a/4f96b0b3ad5d839fd8a7a05da769dc0c581f9a93b92cde73511040b7cf72780a-json.log",
	        "Name": "/multinode-782472",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-782472:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-782472",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/940218807f716d3dce3942d8c0b004bfc9d79b2e46e4c2d2269a3f3041dec5f2-init/diff:/var/lib/docker/overlay2/ba2e4d17dafea75bb4f24482e38d11907530383cc2bd79f5b12dd92aeb991448/diff",
	                "MergedDir": "/var/lib/docker/overlay2/940218807f716d3dce3942d8c0b004bfc9d79b2e46e4c2d2269a3f3041dec5f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/940218807f716d3dce3942d8c0b004bfc9d79b2e46e4c2d2269a3f3041dec5f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/940218807f716d3dce3942d8c0b004bfc9d79b2e46e4c2d2269a3f3041dec5f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-782472",
	                "Source": "/var/lib/docker/volumes/multinode-782472/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-782472",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-782472",
	                "name.minikube.sigs.k8s.io": "multinode-782472",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4c2aa9743adfdaac885cba756c05cb5d098d94b70e934d605ffcc1879651bcbc",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33492"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33491"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33488"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33490"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33489"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4c2aa9743adf",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-782472": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4f96b0b3ad5d",
	                        "multinode-782472"
	                    ],
	                    "NetworkID": "35fe0716e990c5c483c08c057d77c1ae994e29a191e9479ec2ff6286dcb828f8",
	                    "EndpointID": "b7b00e53d906d2a015dec327cce04080d70bfbba96f0ed7a850dda422162ed84",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-782472 -n multinode-782472
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-782472 logs -n 25: (1.704350722s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-813478                           | mount-start-2-813478 | jenkins | v1.31.2 | 06 Sep 23 20:19 UTC | 06 Sep 23 20:19 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-813478 ssh -- ls                    | mount-start-2-813478 | jenkins | v1.31.2 | 06 Sep 23 20:19 UTC | 06 Sep 23 20:19 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-811361                           | mount-start-1-811361 | jenkins | v1.31.2 | 06 Sep 23 20:19 UTC | 06 Sep 23 20:19 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-813478 ssh -- ls                    | mount-start-2-813478 | jenkins | v1.31.2 | 06 Sep 23 20:19 UTC | 06 Sep 23 20:19 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-813478                           | mount-start-2-813478 | jenkins | v1.31.2 | 06 Sep 23 20:19 UTC | 06 Sep 23 20:19 UTC |
	| start   | -p mount-start-2-813478                           | mount-start-2-813478 | jenkins | v1.31.2 | 06 Sep 23 20:19 UTC | 06 Sep 23 20:19 UTC |
	| ssh     | mount-start-2-813478 ssh -- ls                    | mount-start-2-813478 | jenkins | v1.31.2 | 06 Sep 23 20:19 UTC | 06 Sep 23 20:19 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-813478                           | mount-start-2-813478 | jenkins | v1.31.2 | 06 Sep 23 20:19 UTC | 06 Sep 23 20:19 UTC |
	| delete  | -p mount-start-1-811361                           | mount-start-1-811361 | jenkins | v1.31.2 | 06 Sep 23 20:19 UTC | 06 Sep 23 20:19 UTC |
	| start   | -p multinode-782472                               | multinode-782472     | jenkins | v1.31.2 | 06 Sep 23 20:19 UTC | 06 Sep 23 20:21 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-782472 -- apply -f                   | multinode-782472     | jenkins | v1.31.2 | 06 Sep 23 20:21 UTC | 06 Sep 23 20:21 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-782472 -- rollout                    | multinode-782472     | jenkins | v1.31.2 | 06 Sep 23 20:21 UTC | 06 Sep 23 20:21 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-782472 -- get pods -o                | multinode-782472     | jenkins | v1.31.2 | 06 Sep 23 20:21 UTC | 06 Sep 23 20:21 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-782472 -- get pods -o                | multinode-782472     | jenkins | v1.31.2 | 06 Sep 23 20:21 UTC | 06 Sep 23 20:21 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-782472 -- exec                       | multinode-782472     | jenkins | v1.31.2 | 06 Sep 23 20:21 UTC | 06 Sep 23 20:21 UTC |
	|         | busybox-5bc68d56bd-pwl5s --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-782472 -- exec                       | multinode-782472     | jenkins | v1.31.2 | 06 Sep 23 20:21 UTC | 06 Sep 23 20:21 UTC |
	|         | busybox-5bc68d56bd-thpl6 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-782472 -- exec                       | multinode-782472     | jenkins | v1.31.2 | 06 Sep 23 20:21 UTC | 06 Sep 23 20:21 UTC |
	|         | busybox-5bc68d56bd-pwl5s --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-782472 -- exec                       | multinode-782472     | jenkins | v1.31.2 | 06 Sep 23 20:21 UTC | 06 Sep 23 20:21 UTC |
	|         | busybox-5bc68d56bd-thpl6 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-782472 -- exec                       | multinode-782472     | jenkins | v1.31.2 | 06 Sep 23 20:21 UTC | 06 Sep 23 20:21 UTC |
	|         | busybox-5bc68d56bd-pwl5s -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-782472 -- exec                       | multinode-782472     | jenkins | v1.31.2 | 06 Sep 23 20:21 UTC | 06 Sep 23 20:21 UTC |
	|         | busybox-5bc68d56bd-thpl6 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-782472 -- get pods -o                | multinode-782472     | jenkins | v1.31.2 | 06 Sep 23 20:21 UTC | 06 Sep 23 20:21 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-782472 -- exec                       | multinode-782472     | jenkins | v1.31.2 | 06 Sep 23 20:21 UTC | 06 Sep 23 20:21 UTC |
	|         | busybox-5bc68d56bd-pwl5s                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-782472 -- exec                       | multinode-782472     | jenkins | v1.31.2 | 06 Sep 23 20:21 UTC |                     |
	|         | busybox-5bc68d56bd-pwl5s -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-782472 -- exec                       | multinode-782472     | jenkins | v1.31.2 | 06 Sep 23 20:21 UTC | 06 Sep 23 20:21 UTC |
	|         | busybox-5bc68d56bd-thpl6                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-782472 -- exec                       | multinode-782472     | jenkins | v1.31.2 | 06 Sep 23 20:21 UTC |                     |
	|         | busybox-5bc68d56bd-thpl6 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 20:19:53
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 20:19:53.382984  721676 out.go:296] Setting OutFile to fd 1 ...
	I0906 20:19:53.383124  721676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:19:53.383134  721676 out.go:309] Setting ErrFile to fd 2...
	I0906 20:19:53.383140  721676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:19:53.383406  721676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17116-652515/.minikube/bin
	I0906 20:19:53.383804  721676 out.go:303] Setting JSON to false
	I0906 20:19:53.384813  721676 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10748,"bootTime":1694020846,"procs":356,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0906 20:19:53.384884  721676 start.go:138] virtualization:  
	I0906 20:19:53.387331  721676 out.go:177] * [multinode-782472] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0906 20:19:53.389710  721676 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 20:19:53.391554  721676 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 20:19:53.389813  721676 notify.go:220] Checking for updates...
	I0906 20:19:53.393543  721676 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig
	I0906 20:19:53.395289  721676 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube
	I0906 20:19:53.397412  721676 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0906 20:19:53.399092  721676 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 20:19:53.400912  721676 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 20:19:53.426502  721676 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0906 20:19:53.426604  721676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 20:19:53.521233  721676 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2023-09-06 20:19:53.510763424 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0906 20:19:53.521351  721676 docker.go:294] overlay module found
	I0906 20:19:53.524471  721676 out.go:177] * Using the docker driver based on user configuration
	I0906 20:19:53.526131  721676 start.go:298] selected driver: docker
	I0906 20:19:53.526148  721676 start.go:902] validating driver "docker" against <nil>
	I0906 20:19:53.526164  721676 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 20:19:53.526786  721676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 20:19:53.593019  721676 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2023-09-06 20:19:53.583553661 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0906 20:19:53.593169  721676 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 20:19:53.593392  721676 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:19:53.594939  721676 out.go:177] * Using Docker driver with root privileges
	I0906 20:19:53.596514  721676 cni.go:84] Creating CNI manager for ""
	I0906 20:19:53.596527  721676 cni.go:136] 0 nodes found, recommending kindnet
	I0906 20:19:53.596544  721676 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0906 20:19:53.596561  721676 start_flags.go:321] config:
	{Name:multinode-782472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-782472 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cr
io CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 20:19:53.598446  721676 out.go:177] * Starting control plane node multinode-782472 in cluster multinode-782472
	I0906 20:19:53.600099  721676 cache.go:122] Beginning downloading kic base image for docker with crio
	I0906 20:19:53.601769  721676 out.go:177] * Pulling base image ...
	I0906 20:19:53.603460  721676 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0906 20:19:53.603515  721676 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4
	I0906 20:19:53.603527  721676 cache.go:57] Caching tarball of preloaded images
	I0906 20:19:53.603540  721676 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon
	I0906 20:19:53.603603  721676 preload.go:174] Found /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0906 20:19:53.603613  721676 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0906 20:19:53.603952  721676 profile.go:148] Saving config to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/config.json ...
	I0906 20:19:53.603981  721676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/config.json: {Name:mkd9539a3b59fb66b3fcf0b7b2fbe54252897319 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:19:53.621005  721676 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon, skipping pull
	I0906 20:19:53.621029  721676 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad exists in daemon, skipping load
	I0906 20:19:53.621055  721676 cache.go:195] Successfully downloaded all kic artifacts
	I0906 20:19:53.621090  721676 start.go:365] acquiring machines lock for multinode-782472: {Name:mk932ca98e451103ded9042aff9d6d4501feceea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:19:53.621224  721676 start.go:369] acquired machines lock for "multinode-782472" in 105.042µs
	I0906 20:19:53.621252  721676 start.go:93] Provisioning new machine with config: &{Name:multinode-782472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-782472 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:19:53.621367  721676 start.go:125] createHost starting for "" (driver="docker")
	I0906 20:19:53.623385  721676 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0906 20:19:53.623642  721676 start.go:159] libmachine.API.Create for "multinode-782472" (driver="docker")
	I0906 20:19:53.623670  721676 client.go:168] LocalClient.Create starting
	I0906 20:19:53.623748  721676 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem
	I0906 20:19:53.623784  721676 main.go:141] libmachine: Decoding PEM data...
	I0906 20:19:53.623805  721676 main.go:141] libmachine: Parsing certificate...
	I0906 20:19:53.623874  721676 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem
	I0906 20:19:53.623897  721676 main.go:141] libmachine: Decoding PEM data...
	I0906 20:19:53.623912  721676 main.go:141] libmachine: Parsing certificate...
	I0906 20:19:53.624305  721676 cli_runner.go:164] Run: docker network inspect multinode-782472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0906 20:19:53.642244  721676 cli_runner.go:211] docker network inspect multinode-782472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0906 20:19:53.642339  721676 network_create.go:281] running [docker network inspect multinode-782472] to gather additional debugging logs...
	I0906 20:19:53.642362  721676 cli_runner.go:164] Run: docker network inspect multinode-782472
	W0906 20:19:53.663537  721676 cli_runner.go:211] docker network inspect multinode-782472 returned with exit code 1
	I0906 20:19:53.663574  721676 network_create.go:284] error running [docker network inspect multinode-782472]: docker network inspect multinode-782472: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-782472 not found
	I0906 20:19:53.663594  721676 network_create.go:286] output of [docker network inspect multinode-782472]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-782472 not found
	
	** /stderr **
	I0906 20:19:53.663655  721676 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0906 20:19:53.682928  721676 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f4f092eb4771 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:82:b7:f8:ad} reservation:<nil>}
	I0906 20:19:53.683342  721676 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000dfbdc0}
	I0906 20:19:53.683370  721676 network_create.go:123] attempt to create docker network multinode-782472 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0906 20:19:53.683431  721676 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-782472 multinode-782472
	I0906 20:19:53.764081  721676 network_create.go:107] docker network multinode-782472 192.168.58.0/24 created
	I0906 20:19:53.764112  721676 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-782472" container
	I0906 20:19:53.764197  721676 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0906 20:19:53.781192  721676 cli_runner.go:164] Run: docker volume create multinode-782472 --label name.minikube.sigs.k8s.io=multinode-782472 --label created_by.minikube.sigs.k8s.io=true
	I0906 20:19:53.800851  721676 oci.go:103] Successfully created a docker volume multinode-782472
	I0906 20:19:53.800946  721676 cli_runner.go:164] Run: docker run --rm --name multinode-782472-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-782472 --entrypoint /usr/bin/test -v multinode-782472:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -d /var/lib
	I0906 20:19:54.437285  721676 oci.go:107] Successfully prepared a docker volume multinode-782472
	I0906 20:19:54.437320  721676 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0906 20:19:54.437342  721676 kic.go:190] Starting extracting preloaded images to volume ...
	I0906 20:19:54.437436  721676 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-782472:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -I lz4 -xf /preloaded.tar -C /extractDir
	I0906 20:19:58.723403  721676 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-782472:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -I lz4 -xf /preloaded.tar -C /extractDir: (4.285925792s)
	I0906 20:19:58.723444  721676 kic.go:199] duration metric: took 4.286092 seconds to extract preloaded images to volume
	W0906 20:19:58.723596  721676 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0906 20:19:58.723714  721676 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0906 20:19:58.788697  721676 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-782472 --name multinode-782472 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-782472 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-782472 --network multinode-782472 --ip 192.168.58.2 --volume multinode-782472:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad
	I0906 20:19:59.145999  721676 cli_runner.go:164] Run: docker container inspect multinode-782472 --format={{.State.Running}}
	I0906 20:19:59.181363  721676 cli_runner.go:164] Run: docker container inspect multinode-782472 --format={{.State.Status}}
	I0906 20:19:59.205560  721676 cli_runner.go:164] Run: docker exec multinode-782472 stat /var/lib/dpkg/alternatives/iptables
	I0906 20:19:59.273389  721676 oci.go:144] the created container "multinode-782472" has a running status.
	I0906 20:19:59.273428  721676 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17116-652515/.minikube/machines/multinode-782472/id_rsa...
	I0906 20:19:59.632737  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/machines/multinode-782472/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0906 20:19:59.632786  721676 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17116-652515/.minikube/machines/multinode-782472/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0906 20:19:59.653309  721676 cli_runner.go:164] Run: docker container inspect multinode-782472 --format={{.State.Status}}
	I0906 20:19:59.671616  721676 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0906 20:19:59.671636  721676 kic_runner.go:114] Args: [docker exec --privileged multinode-782472 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0906 20:19:59.763517  721676 cli_runner.go:164] Run: docker container inspect multinode-782472 --format={{.State.Status}}
	I0906 20:19:59.790800  721676 machine.go:88] provisioning docker machine ...
	I0906 20:19:59.790827  721676 ubuntu.go:169] provisioning hostname "multinode-782472"
	I0906 20:19:59.790906  721676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-782472
	I0906 20:19:59.823710  721676 main.go:141] libmachine: Using SSH client type: native
	I0906 20:19:59.824717  721676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33492 <nil> <nil>}
	I0906 20:19:59.824737  721676 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-782472 && echo "multinode-782472" | sudo tee /etc/hostname
	I0906 20:19:59.825364  721676 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33556->127.0.0.1:33492: read: connection reset by peer
	I0906 20:20:02.981742  721676 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-782472
	
	I0906 20:20:02.981832  721676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-782472
	I0906 20:20:03.003502  721676 main.go:141] libmachine: Using SSH client type: native
	I0906 20:20:03.004003  721676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33492 <nil> <nil>}
	I0906 20:20:03.004028  721676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-782472' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-782472/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-782472' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:20:03.147757  721676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:20:03.147786  721676 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17116-652515/.minikube CaCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17116-652515/.minikube}
	I0906 20:20:03.147812  721676 ubuntu.go:177] setting up certificates
	I0906 20:20:03.147840  721676 provision.go:83] configureAuth start
	I0906 20:20:03.147908  721676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-782472
	I0906 20:20:03.167159  721676 provision.go:138] copyHostCerts
	I0906 20:20:03.167208  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem
	I0906 20:20:03.167242  721676 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem, removing ...
	I0906 20:20:03.167253  721676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem
	I0906 20:20:03.167336  721676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem (1082 bytes)
	I0906 20:20:03.167427  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem
	I0906 20:20:03.167449  721676 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem, removing ...
	I0906 20:20:03.167456  721676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem
	I0906 20:20:03.167486  721676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem (1123 bytes)
	I0906 20:20:03.167537  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem
	I0906 20:20:03.167559  721676 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem, removing ...
	I0906 20:20:03.167564  721676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem
	I0906 20:20:03.167595  721676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem (1679 bytes)
	I0906 20:20:03.167646  721676 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem org=jenkins.multinode-782472 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-782472]
	I0906 20:20:03.949186  721676 provision.go:172] copyRemoteCerts
	I0906 20:20:03.949267  721676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:20:03.949313  721676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-782472
	I0906 20:20:03.968224  721676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33492 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/multinode-782472/id_rsa Username:docker}
	I0906 20:20:04.077344  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 20:20:04.077411  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 20:20:04.109853  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 20:20:04.109929  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0906 20:20:04.140511  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 20:20:04.140589  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:20:04.170227  721676 provision.go:86] duration metric: configureAuth took 1.022370064s
	I0906 20:20:04.170291  721676 ubuntu.go:193] setting minikube options for container-runtime
	I0906 20:20:04.170514  721676 config.go:182] Loaded profile config "multinode-782472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 20:20:04.170627  721676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-782472
	I0906 20:20:04.188984  721676 main.go:141] libmachine: Using SSH client type: native
	I0906 20:20:04.189418  721676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33492 <nil> <nil>}
	I0906 20:20:04.189440  721676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:20:04.445014  721676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:20:04.445106  721676 machine.go:91] provisioned docker machine in 4.65428882s
	I0906 20:20:04.445133  721676 client.go:171] LocalClient.Create took 10.821452692s
	I0906 20:20:04.445192  721676 start.go:167] duration metric: libmachine.API.Create for "multinode-782472" took 10.821549323s
	I0906 20:20:04.445213  721676 start.go:300] post-start starting for "multinode-782472" (driver="docker")
	I0906 20:20:04.445236  721676 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:20:04.445344  721676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:20:04.445432  721676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-782472
	I0906 20:20:04.463793  721676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33492 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/multinode-782472/id_rsa Username:docker}
	I0906 20:20:04.565493  721676 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:20:04.569980  721676 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0906 20:20:04.570005  721676 command_runner.go:130] > NAME="Ubuntu"
	I0906 20:20:04.570012  721676 command_runner.go:130] > VERSION_ID="22.04"
	I0906 20:20:04.570020  721676 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0906 20:20:04.570025  721676 command_runner.go:130] > VERSION_CODENAME=jammy
	I0906 20:20:04.570029  721676 command_runner.go:130] > ID=ubuntu
	I0906 20:20:04.570035  721676 command_runner.go:130] > ID_LIKE=debian
	I0906 20:20:04.570040  721676 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0906 20:20:04.570090  721676 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0906 20:20:04.570113  721676 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0906 20:20:04.570122  721676 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0906 20:20:04.570127  721676 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0906 20:20:04.570201  721676 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 20:20:04.570257  721676 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 20:20:04.570276  721676 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 20:20:04.570284  721676 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0906 20:20:04.570297  721676 filesync.go:126] Scanning /home/jenkins/minikube-integration/17116-652515/.minikube/addons for local assets ...
	I0906 20:20:04.570365  721676 filesync.go:126] Scanning /home/jenkins/minikube-integration/17116-652515/.minikube/files for local assets ...
	I0906 20:20:04.570455  721676 filesync.go:149] local asset: /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem -> 6579002.pem in /etc/ssl/certs
	I0906 20:20:04.570465  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem -> /etc/ssl/certs/6579002.pem
	I0906 20:20:04.570571  721676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:20:04.581652  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem --> /etc/ssl/certs/6579002.pem (1708 bytes)
	I0906 20:20:04.610896  721676 start.go:303] post-start completed in 165.655252ms
	I0906 20:20:04.611263  721676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-782472
	I0906 20:20:04.630021  721676 profile.go:148] Saving config to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/config.json ...
	I0906 20:20:04.630317  721676 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 20:20:04.630372  721676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-782472
	I0906 20:20:04.648518  721676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33492 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/multinode-782472/id_rsa Username:docker}
	I0906 20:20:04.744758  721676 command_runner.go:130] > 17%!
	(MISSING)I0906 20:20:04.744835  721676 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 20:20:04.750880  721676 command_runner.go:130] > 163G
	I0906 20:20:04.751336  721676 start.go:128] duration metric: createHost completed in 11.129956655s
	I0906 20:20:04.751359  721676 start.go:83] releasing machines lock for "multinode-782472", held for 11.130124655s
	I0906 20:20:04.751444  721676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-782472
	I0906 20:20:04.770473  721676 ssh_runner.go:195] Run: cat /version.json
	I0906 20:20:04.770529  721676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-782472
	I0906 20:20:04.770601  721676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:20:04.770653  721676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-782472
	I0906 20:20:04.791997  721676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33492 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/multinode-782472/id_rsa Username:docker}
	I0906 20:20:04.808787  721676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33492 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/multinode-782472/id_rsa Username:docker}
	I0906 20:20:05.021718  721676 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0906 20:20:05.021827  721676 command_runner.go:130] > {"iso_version": "v1.31.0-1692872107-17120", "kicbase_version": "v0.0.40-1693218425-17145", "minikube_version": "v1.31.2", "commit": "20676dbfdaf9085e354365adb7c56448fb3dd7be"}
	I0906 20:20:05.021981  721676 ssh_runner.go:195] Run: systemctl --version
	I0906 20:20:05.028114  721676 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.9)
	I0906 20:20:05.028194  721676 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0906 20:20:05.028299  721676 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:20:05.179071  721676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 20:20:05.185013  721676 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0906 20:20:05.185089  721676 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0906 20:20:05.185112  721676 command_runner.go:130] > Device: 3ah/58d	Inode: 5449409     Links: 1
	I0906 20:20:05.185127  721676 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0906 20:20:05.185135  721676 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0906 20:20:05.185141  721676 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0906 20:20:05.185148  721676 command_runner.go:130] > Change: 2023-09-06 19:57:06.408535289 +0000
	I0906 20:20:05.185153  721676 command_runner.go:130] >  Birth: 2023-09-06 19:57:06.408535289 +0000
	I0906 20:20:05.185421  721676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:20:05.211677  721676 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0906 20:20:05.211816  721676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:20:05.257197  721676 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0906 20:20:05.257242  721676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0906 20:20:05.257249  721676 start.go:466] detecting cgroup driver to use...
	I0906 20:20:05.257297  721676 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0906 20:20:05.257350  721676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:20:05.277094  721676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:20:05.292373  721676 docker.go:196] disabling cri-docker service (if available) ...
	I0906 20:20:05.292433  721676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:20:05.309076  721676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:20:05.327155  721676 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:20:05.423594  721676 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:20:05.538178  721676 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0906 20:20:05.538226  721676 docker.go:212] disabling docker service ...
	I0906 20:20:05.538317  721676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:20:05.561240  721676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:20:05.575619  721676 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:20:05.670614  721676 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0906 20:20:05.670706  721676 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:20:05.773310  721676 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0906 20:20:05.773403  721676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:20:05.788658  721676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:20:05.810696  721676 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0906 20:20:05.810729  721676 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0906 20:20:05.810791  721676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:20:05.823996  721676 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:20:05.824117  721676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:20:05.837334  721676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:20:05.850371  721676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:20:05.864254  721676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
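	The sed/append edits above (pause image, cgroup manager, conmon cgroup) leave /etc/crio/crio.conf.d/02-crio.conf with values along these lines. This is a sketch reconstructed only from the commands shown in this log, not a capture of the file, and the section headers are the usual CRI-O drop-in layout rather than anything printed here:
	
	    # reconstructed from the three commands above (assumed layout)
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.9"
	
	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"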
	I0906 20:20:05.876212  721676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:20:05.885750  721676 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0906 20:20:05.887142  721676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:20:05.897950  721676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:20:05.990571  721676 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:20:06.149014  721676 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:20:06.149133  721676 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:20:06.154434  721676 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0906 20:20:06.154464  721676 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0906 20:20:06.154472  721676 command_runner.go:130] > Device: 43h/67d	Inode: 190         Links: 1
	I0906 20:20:06.154500  721676 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0906 20:20:06.154512  721676 command_runner.go:130] > Access: 2023-09-06 20:20:06.130583966 +0000
	I0906 20:20:06.154525  721676 command_runner.go:130] > Modify: 2023-09-06 20:20:06.130583966 +0000
	I0906 20:20:06.154535  721676 command_runner.go:130] > Change: 2023-09-06 20:20:06.130583966 +0000
	I0906 20:20:06.154540  721676 command_runner.go:130] >  Birth: -
	I0906 20:20:06.154561  721676 start.go:534] Will wait 60s for crictl version
	I0906 20:20:06.154641  721676 ssh_runner.go:195] Run: which crictl
	I0906 20:20:06.159459  721676 command_runner.go:130] > /usr/bin/crictl
	I0906 20:20:06.159577  721676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:20:06.205418  721676 command_runner.go:130] > Version:  0.1.0
	I0906 20:20:06.205447  721676 command_runner.go:130] > RuntimeName:  cri-o
	I0906 20:20:06.205481  721676 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0906 20:20:06.205502  721676 command_runner.go:130] > RuntimeApiVersion:  v1
	I0906 20:20:06.208366  721676 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0906 20:20:06.208494  721676 ssh_runner.go:195] Run: crio --version
	I0906 20:20:06.252087  721676 command_runner.go:130] > crio version 1.24.6
	I0906 20:20:06.252106  721676 command_runner.go:130] > Version:          1.24.6
	I0906 20:20:06.252114  721676 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0906 20:20:06.252119  721676 command_runner.go:130] > GitTreeState:     clean
	I0906 20:20:06.252164  721676 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0906 20:20:06.252171  721676 command_runner.go:130] > GoVersion:        go1.18.2
	I0906 20:20:06.252176  721676 command_runner.go:130] > Compiler:         gc
	I0906 20:20:06.252185  721676 command_runner.go:130] > Platform:         linux/arm64
	I0906 20:20:06.252191  721676 command_runner.go:130] > Linkmode:         dynamic
	I0906 20:20:06.252217  721676 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0906 20:20:06.252229  721676 command_runner.go:130] > SeccompEnabled:   true
	I0906 20:20:06.252243  721676 command_runner.go:130] > AppArmorEnabled:  false
	I0906 20:20:06.254432  721676 ssh_runner.go:195] Run: crio --version
	I0906 20:20:06.298379  721676 command_runner.go:130] > crio version 1.24.6
	I0906 20:20:06.298398  721676 command_runner.go:130] > Version:          1.24.6
	I0906 20:20:06.298408  721676 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0906 20:20:06.298413  721676 command_runner.go:130] > GitTreeState:     clean
	I0906 20:20:06.298463  721676 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0906 20:20:06.298473  721676 command_runner.go:130] > GoVersion:        go1.18.2
	I0906 20:20:06.298478  721676 command_runner.go:130] > Compiler:         gc
	I0906 20:20:06.298495  721676 command_runner.go:130] > Platform:         linux/arm64
	I0906 20:20:06.298515  721676 command_runner.go:130] > Linkmode:         dynamic
	I0906 20:20:06.298533  721676 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0906 20:20:06.298556  721676 command_runner.go:130] > SeccompEnabled:   true
	I0906 20:20:06.298569  721676 command_runner.go:130] > AppArmorEnabled:  false
	I0906 20:20:06.304138  721676 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0906 20:20:06.307086  721676 cli_runner.go:164] Run: docker network inspect multinode-782472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0906 20:20:06.325454  721676 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0906 20:20:06.330413  721676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:20:06.344667  721676 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0906 20:20:06.344740  721676 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:20:06.404871  721676 command_runner.go:130] > {
	I0906 20:20:06.404889  721676 command_runner.go:130] >   "images": [
	I0906 20:20:06.404894  721676 command_runner.go:130] >     {
	I0906 20:20:06.404904  721676 command_runner.go:130] >       "id": "b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79",
	I0906 20:20:06.404910  721676 command_runner.go:130] >       "repoTags": [
	I0906 20:20:06.404917  721676 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0906 20:20:06.404922  721676 command_runner.go:130] >       ],
	I0906 20:20:06.404929  721676 command_runner.go:130] >       "repoDigests": [
	I0906 20:20:06.404942  721676 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f",
	I0906 20:20:06.404954  721676 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"
	I0906 20:20:06.404959  721676 command_runner.go:130] >       ],
	I0906 20:20:06.404972  721676 command_runner.go:130] >       "size": "60881430",
	I0906 20:20:06.404979  721676 command_runner.go:130] >       "uid": null,
	I0906 20:20:06.404984  721676 command_runner.go:130] >       "username": "",
	I0906 20:20:06.404993  721676 command_runner.go:130] >       "spec": null,
	I0906 20:20:06.405002  721676 command_runner.go:130] >       "pinned": false
	I0906 20:20:06.405010  721676 command_runner.go:130] >     },
	I0906 20:20:06.405014  721676 command_runner.go:130] >     {
	I0906 20:20:06.405022  721676 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0906 20:20:06.405027  721676 command_runner.go:130] >       "repoTags": [
	I0906 20:20:06.405033  721676 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0906 20:20:06.405038  721676 command_runner.go:130] >       ],
	I0906 20:20:06.405046  721676 command_runner.go:130] >       "repoDigests": [
	I0906 20:20:06.405058  721676 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0906 20:20:06.405070  721676 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0906 20:20:06.405074  721676 command_runner.go:130] >       ],
	I0906 20:20:06.405080  721676 command_runner.go:130] >       "size": "29037500",
	I0906 20:20:06.405088  721676 command_runner.go:130] >       "uid": null,
	I0906 20:20:06.405093  721676 command_runner.go:130] >       "username": "",
	I0906 20:20:06.405098  721676 command_runner.go:130] >       "spec": null,
	I0906 20:20:06.405103  721676 command_runner.go:130] >       "pinned": false
	I0906 20:20:06.405109  721676 command_runner.go:130] >     },
	I0906 20:20:06.405114  721676 command_runner.go:130] >     {
	I0906 20:20:06.405121  721676 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0906 20:20:06.405135  721676 command_runner.go:130] >       "repoTags": [
	I0906 20:20:06.405142  721676 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0906 20:20:06.405146  721676 command_runner.go:130] >       ],
	I0906 20:20:06.405153  721676 command_runner.go:130] >       "repoDigests": [
	I0906 20:20:06.405163  721676 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0906 20:20:06.405175  721676 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0906 20:20:06.405180  721676 command_runner.go:130] >       ],
	I0906 20:20:06.405186  721676 command_runner.go:130] >       "size": "51393451",
	I0906 20:20:06.405191  721676 command_runner.go:130] >       "uid": null,
	I0906 20:20:06.405201  721676 command_runner.go:130] >       "username": "",
	I0906 20:20:06.405206  721676 command_runner.go:130] >       "spec": null,
	I0906 20:20:06.405212  721676 command_runner.go:130] >       "pinned": false
	I0906 20:20:06.405218  721676 command_runner.go:130] >     },
	I0906 20:20:06.405223  721676 command_runner.go:130] >     {
	I0906 20:20:06.405233  721676 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I0906 20:20:06.405241  721676 command_runner.go:130] >       "repoTags": [
	I0906 20:20:06.405247  721676 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0906 20:20:06.405254  721676 command_runner.go:130] >       ],
	I0906 20:20:06.405259  721676 command_runner.go:130] >       "repoDigests": [
	I0906 20:20:06.405268  721676 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I0906 20:20:06.405279  721676 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I0906 20:20:06.405287  721676 command_runner.go:130] >       ],
	I0906 20:20:06.405295  721676 command_runner.go:130] >       "size": "182203183",
	I0906 20:20:06.405302  721676 command_runner.go:130] >       "uid": {
	I0906 20:20:06.405307  721676 command_runner.go:130] >         "value": "0"
	I0906 20:20:06.405314  721676 command_runner.go:130] >       },
	I0906 20:20:06.405319  721676 command_runner.go:130] >       "username": "",
	I0906 20:20:06.405324  721676 command_runner.go:130] >       "spec": null,
	I0906 20:20:06.405336  721676 command_runner.go:130] >       "pinned": false
	I0906 20:20:06.405341  721676 command_runner.go:130] >     },
	I0906 20:20:06.405347  721676 command_runner.go:130] >     {
	I0906 20:20:06.405358  721676 command_runner.go:130] >       "id": "b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a",
	I0906 20:20:06.405363  721676 command_runner.go:130] >       "repoTags": [
	I0906 20:20:06.405369  721676 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.1"
	I0906 20:20:06.405376  721676 command_runner.go:130] >       ],
	I0906 20:20:06.405381  721676 command_runner.go:130] >       "repoDigests": [
	I0906 20:20:06.405394  721676 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:d4ad404d1c05c2f18b76f5d6936b838be07fed14b3ffefd09a6b2f0c20e3ef5c",
	I0906 20:20:06.405403  721676 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"
	I0906 20:20:06.405411  721676 command_runner.go:130] >       ],
	I0906 20:20:06.405416  721676 command_runner.go:130] >       "size": "120857550",
	I0906 20:20:06.405421  721676 command_runner.go:130] >       "uid": {
	I0906 20:20:06.405429  721676 command_runner.go:130] >         "value": "0"
	I0906 20:20:06.405434  721676 command_runner.go:130] >       },
	I0906 20:20:06.405439  721676 command_runner.go:130] >       "username": "",
	I0906 20:20:06.405444  721676 command_runner.go:130] >       "spec": null,
	I0906 20:20:06.405450  721676 command_runner.go:130] >       "pinned": false
	I0906 20:20:06.405454  721676 command_runner.go:130] >     },
	I0906 20:20:06.405460  721676 command_runner.go:130] >     {
	I0906 20:20:06.405468  721676 command_runner.go:130] >       "id": "8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965",
	I0906 20:20:06.405476  721676 command_runner.go:130] >       "repoTags": [
	I0906 20:20:06.405483  721676 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.1"
	I0906 20:20:06.405487  721676 command_runner.go:130] >       ],
	I0906 20:20:06.405496  721676 command_runner.go:130] >       "repoDigests": [
	I0906 20:20:06.405505  721676 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4a0dd5abeba8e3ca67884fe9db43e8dbb299ad3199f0c6281e8a70f03ce4248f",
	I0906 20:20:06.405518  721676 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"
	I0906 20:20:06.405523  721676 command_runner.go:130] >       ],
	I0906 20:20:06.405528  721676 command_runner.go:130] >       "size": "117187378",
	I0906 20:20:06.405533  721676 command_runner.go:130] >       "uid": {
	I0906 20:20:06.405538  721676 command_runner.go:130] >         "value": "0"
	I0906 20:20:06.405544  721676 command_runner.go:130] >       },
	I0906 20:20:06.405553  721676 command_runner.go:130] >       "username": "",
	I0906 20:20:06.405559  721676 command_runner.go:130] >       "spec": null,
	I0906 20:20:06.405567  721676 command_runner.go:130] >       "pinned": false
	I0906 20:20:06.405571  721676 command_runner.go:130] >     },
	I0906 20:20:06.405575  721676 command_runner.go:130] >     {
	I0906 20:20:06.405584  721676 command_runner.go:130] >       "id": "812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26",
	I0906 20:20:06.405592  721676 command_runner.go:130] >       "repoTags": [
	I0906 20:20:06.405599  721676 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.1"
	I0906 20:20:06.405604  721676 command_runner.go:130] >       ],
	I0906 20:20:06.405609  721676 command_runner.go:130] >       "repoDigests": [
	I0906 20:20:06.405618  721676 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c",
	I0906 20:20:06.405629  721676 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a9d9eaff8bae5cb45cc640255fd1490c85c3517d92f2c78bcd71dde9a12d5220"
	I0906 20:20:06.405636  721676 command_runner.go:130] >       ],
	I0906 20:20:06.405642  721676 command_runner.go:130] >       "size": "69926807",
	I0906 20:20:06.405647  721676 command_runner.go:130] >       "uid": null,
	I0906 20:20:06.405652  721676 command_runner.go:130] >       "username": "",
	I0906 20:20:06.405657  721676 command_runner.go:130] >       "spec": null,
	I0906 20:20:06.405665  721676 command_runner.go:130] >       "pinned": false
	I0906 20:20:06.405669  721676 command_runner.go:130] >     },
	I0906 20:20:06.405673  721676 command_runner.go:130] >     {
	I0906 20:20:06.405684  721676 command_runner.go:130] >       "id": "b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87",
	I0906 20:20:06.405689  721676 command_runner.go:130] >       "repoTags": [
	I0906 20:20:06.405696  721676 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.1"
	I0906 20:20:06.405703  721676 command_runner.go:130] >       ],
	I0906 20:20:06.405709  721676 command_runner.go:130] >       "repoDigests": [
	I0906 20:20:06.405744  721676 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0bb4ad9c0c3d2258bc97616ddb51291e5d20d6ba7d4406767f4355f56fab842d",
	I0906 20:20:06.405760  721676 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4"
	I0906 20:20:06.405766  721676 command_runner.go:130] >       ],
	I0906 20:20:06.405771  721676 command_runner.go:130] >       "size": "59188020",
	I0906 20:20:06.405776  721676 command_runner.go:130] >       "uid": {
	I0906 20:20:06.405780  721676 command_runner.go:130] >         "value": "0"
	I0906 20:20:06.405787  721676 command_runner.go:130] >       },
	I0906 20:20:06.405792  721676 command_runner.go:130] >       "username": "",
	I0906 20:20:06.405802  721676 command_runner.go:130] >       "spec": null,
	I0906 20:20:06.405809  721676 command_runner.go:130] >       "pinned": false
	I0906 20:20:06.405813  721676 command_runner.go:130] >     },
	I0906 20:20:06.405817  721676 command_runner.go:130] >     {
	I0906 20:20:06.405825  721676 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0906 20:20:06.405832  721676 command_runner.go:130] >       "repoTags": [
	I0906 20:20:06.405838  721676 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0906 20:20:06.405843  721676 command_runner.go:130] >       ],
	I0906 20:20:06.405848  721676 command_runner.go:130] >       "repoDigests": [
	I0906 20:20:06.405861  721676 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0906 20:20:06.405870  721676 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0906 20:20:06.405884  721676 command_runner.go:130] >       ],
	I0906 20:20:06.405911  721676 command_runner.go:130] >       "size": "520014",
	I0906 20:20:06.405920  721676 command_runner.go:130] >       "uid": {
	I0906 20:20:06.405925  721676 command_runner.go:130] >         "value": "65535"
	I0906 20:20:06.405929  721676 command_runner.go:130] >       },
	I0906 20:20:06.405937  721676 command_runner.go:130] >       "username": "",
	I0906 20:20:06.405942  721676 command_runner.go:130] >       "spec": null,
	I0906 20:20:06.405948  721676 command_runner.go:130] >       "pinned": false
	I0906 20:20:06.405954  721676 command_runner.go:130] >     }
	I0906 20:20:06.405958  721676 command_runner.go:130] >   ]
	I0906 20:20:06.405964  721676 command_runner.go:130] > }
	I0906 20:20:06.408919  721676 crio.go:496] all images are preloaded for cri-o runtime.
	I0906 20:20:06.408942  721676 crio.go:415] Images already preloaded, skipping extraction
	I0906 20:20:06.409032  721676 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:20:06.451120  721676 command_runner.go:130] > {
	I0906 20:20:06.451141  721676 command_runner.go:130] >   "images": [
	I0906 20:20:06.451146  721676 command_runner.go:130] >     {
	I0906 20:20:06.451165  721676 command_runner.go:130] >       "id": "b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79",
	I0906 20:20:06.451172  721676 command_runner.go:130] >       "repoTags": [
	I0906 20:20:06.451180  721676 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0906 20:20:06.451184  721676 command_runner.go:130] >       ],
	I0906 20:20:06.451189  721676 command_runner.go:130] >       "repoDigests": [
	I0906 20:20:06.451203  721676 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f",
	I0906 20:20:06.451216  721676 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"
	I0906 20:20:06.451221  721676 command_runner.go:130] >       ],
	I0906 20:20:06.451226  721676 command_runner.go:130] >       "size": "60881430",
	I0906 20:20:06.451233  721676 command_runner.go:130] >       "uid": null,
	I0906 20:20:06.451242  721676 command_runner.go:130] >       "username": "",
	I0906 20:20:06.451250  721676 command_runner.go:130] >       "spec": null,
	I0906 20:20:06.451258  721676 command_runner.go:130] >       "pinned": false
	I0906 20:20:06.451262  721676 command_runner.go:130] >     },
	I0906 20:20:06.451270  721676 command_runner.go:130] >     {
	I0906 20:20:06.451278  721676 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0906 20:20:06.451285  721676 command_runner.go:130] >       "repoTags": [
	I0906 20:20:06.451298  721676 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0906 20:20:06.451302  721676 command_runner.go:130] >       ],
	I0906 20:20:06.451308  721676 command_runner.go:130] >       "repoDigests": [
	I0906 20:20:06.451323  721676 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0906 20:20:06.451336  721676 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0906 20:20:06.451340  721676 command_runner.go:130] >       ],
	I0906 20:20:06.451350  721676 command_runner.go:130] >       "size": "29037500",
	I0906 20:20:06.451357  721676 command_runner.go:130] >       "uid": null,
	I0906 20:20:06.451364  721676 command_runner.go:130] >       "username": "",
	I0906 20:20:06.451373  721676 command_runner.go:130] >       "spec": null,
	I0906 20:20:06.451378  721676 command_runner.go:130] >       "pinned": false
	I0906 20:20:06.451382  721676 command_runner.go:130] >     },
	I0906 20:20:06.451389  721676 command_runner.go:130] >     {
	I0906 20:20:06.451396  721676 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0906 20:20:06.451401  721676 command_runner.go:130] >       "repoTags": [
	I0906 20:20:06.451408  721676 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0906 20:20:06.451412  721676 command_runner.go:130] >       ],
	I0906 20:20:06.451422  721676 command_runner.go:130] >       "repoDigests": [
	I0906 20:20:06.451434  721676 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0906 20:20:06.451447  721676 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0906 20:20:06.451451  721676 command_runner.go:130] >       ],
	I0906 20:20:06.451457  721676 command_runner.go:130] >       "size": "51393451",
	I0906 20:20:06.451464  721676 command_runner.go:130] >       "uid": null,
	I0906 20:20:06.451469  721676 command_runner.go:130] >       "username": "",
	I0906 20:20:06.451480  721676 command_runner.go:130] >       "spec": null,
	I0906 20:20:06.451485  721676 command_runner.go:130] >       "pinned": false
	I0906 20:20:06.451499  721676 command_runner.go:130] >     },
	I0906 20:20:06.451507  721676 command_runner.go:130] >     {
	I0906 20:20:06.451515  721676 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I0906 20:20:06.451519  721676 command_runner.go:130] >       "repoTags": [
	I0906 20:20:06.451525  721676 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0906 20:20:06.451532  721676 command_runner.go:130] >       ],
	I0906 20:20:06.451539  721676 command_runner.go:130] >       "repoDigests": [
	I0906 20:20:06.451548  721676 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I0906 20:20:06.451561  721676 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I0906 20:20:06.451568  721676 command_runner.go:130] >       ],
	I0906 20:20:06.451574  721676 command_runner.go:130] >       "size": "182203183",
	I0906 20:20:06.451579  721676 command_runner.go:130] >       "uid": {
	I0906 20:20:06.451586  721676 command_runner.go:130] >         "value": "0"
	I0906 20:20:06.451591  721676 command_runner.go:130] >       },
	I0906 20:20:06.451598  721676 command_runner.go:130] >       "username": "",
	I0906 20:20:06.451603  721676 command_runner.go:130] >       "spec": null,
	I0906 20:20:06.451610  721676 command_runner.go:130] >       "pinned": false
	I0906 20:20:06.451614  721676 command_runner.go:130] >     },
	I0906 20:20:06.451619  721676 command_runner.go:130] >     {
	I0906 20:20:06.451627  721676 command_runner.go:130] >       "id": "b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a",
	I0906 20:20:06.451639  721676 command_runner.go:130] >       "repoTags": [
	I0906 20:20:06.451645  721676 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.1"
	I0906 20:20:06.451652  721676 command_runner.go:130] >       ],
	I0906 20:20:06.451657  721676 command_runner.go:130] >       "repoDigests": [
	I0906 20:20:06.451666  721676 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:d4ad404d1c05c2f18b76f5d6936b838be07fed14b3ffefd09a6b2f0c20e3ef5c",
	I0906 20:20:06.451678  721676 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"
	I0906 20:20:06.451682  721676 command_runner.go:130] >       ],
	I0906 20:20:06.451689  721676 command_runner.go:130] >       "size": "120857550",
	I0906 20:20:06.451696  721676 command_runner.go:130] >       "uid": {
	I0906 20:20:06.451701  721676 command_runner.go:130] >         "value": "0"
	I0906 20:20:06.451706  721676 command_runner.go:130] >       },
	I0906 20:20:06.451714  721676 command_runner.go:130] >       "username": "",
	I0906 20:20:06.451719  721676 command_runner.go:130] >       "spec": null,
	I0906 20:20:06.451728  721676 command_runner.go:130] >       "pinned": false
	I0906 20:20:06.451735  721676 command_runner.go:130] >     },
	I0906 20:20:06.451739  721676 command_runner.go:130] >     {
	I0906 20:20:06.451747  721676 command_runner.go:130] >       "id": "8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965",
	I0906 20:20:06.451754  721676 command_runner.go:130] >       "repoTags": [
	I0906 20:20:06.451764  721676 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.1"
	I0906 20:20:06.451774  721676 command_runner.go:130] >       ],
	I0906 20:20:06.451779  721676 command_runner.go:130] >       "repoDigests": [
	I0906 20:20:06.451788  721676 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4a0dd5abeba8e3ca67884fe9db43e8dbb299ad3199f0c6281e8a70f03ce4248f",
	I0906 20:20:06.451802  721676 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"
	I0906 20:20:06.451808  721676 command_runner.go:130] >       ],
	I0906 20:20:06.451816  721676 command_runner.go:130] >       "size": "117187378",
	I0906 20:20:06.451820  721676 command_runner.go:130] >       "uid": {
	I0906 20:20:06.451826  721676 command_runner.go:130] >         "value": "0"
	I0906 20:20:06.451836  721676 command_runner.go:130] >       },
	I0906 20:20:06.451841  721676 command_runner.go:130] >       "username": "",
	I0906 20:20:06.451854  721676 command_runner.go:130] >       "spec": null,
	I0906 20:20:06.451859  721676 command_runner.go:130] >       "pinned": false
	I0906 20:20:06.451864  721676 command_runner.go:130] >     },
	I0906 20:20:06.451871  721676 command_runner.go:130] >     {
	I0906 20:20:06.451879  721676 command_runner.go:130] >       "id": "812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26",
	I0906 20:20:06.451887  721676 command_runner.go:130] >       "repoTags": [
	I0906 20:20:06.451900  721676 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.1"
	I0906 20:20:06.451907  721676 command_runner.go:130] >       ],
	I0906 20:20:06.451912  721676 command_runner.go:130] >       "repoDigests": [
	I0906 20:20:06.451921  721676 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c",
	I0906 20:20:06.451936  721676 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a9d9eaff8bae5cb45cc640255fd1490c85c3517d92f2c78bcd71dde9a12d5220"
	I0906 20:20:06.451943  721676 command_runner.go:130] >       ],
	I0906 20:20:06.451948  721676 command_runner.go:130] >       "size": "69926807",
	I0906 20:20:06.451953  721676 command_runner.go:130] >       "uid": null,
	I0906 20:20:06.451961  721676 command_runner.go:130] >       "username": "",
	I0906 20:20:06.451966  721676 command_runner.go:130] >       "spec": null,
	I0906 20:20:06.451980  721676 command_runner.go:130] >       "pinned": false
	I0906 20:20:06.451989  721676 command_runner.go:130] >     },
	I0906 20:20:06.451994  721676 command_runner.go:130] >     {
	I0906 20:20:06.452003  721676 command_runner.go:130] >       "id": "b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87",
	I0906 20:20:06.452015  721676 command_runner.go:130] >       "repoTags": [
	I0906 20:20:06.452024  721676 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.1"
	I0906 20:20:06.452029  721676 command_runner.go:130] >       ],
	I0906 20:20:06.452034  721676 command_runner.go:130] >       "repoDigests": [
	I0906 20:20:06.452058  721676 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0bb4ad9c0c3d2258bc97616ddb51291e5d20d6ba7d4406767f4355f56fab842d",
	I0906 20:20:06.452070  721676 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4"
	I0906 20:20:06.452074  721676 command_runner.go:130] >       ],
	I0906 20:20:06.452079  721676 command_runner.go:130] >       "size": "59188020",
	I0906 20:20:06.452084  721676 command_runner.go:130] >       "uid": {
	I0906 20:20:06.452093  721676 command_runner.go:130] >         "value": "0"
	I0906 20:20:06.452099  721676 command_runner.go:130] >       },
	I0906 20:20:06.452105  721676 command_runner.go:130] >       "username": "",
	I0906 20:20:06.452113  721676 command_runner.go:130] >       "spec": null,
	I0906 20:20:06.452118  721676 command_runner.go:130] >       "pinned": false
	I0906 20:20:06.452122  721676 command_runner.go:130] >     },
	I0906 20:20:06.452129  721676 command_runner.go:130] >     {
	I0906 20:20:06.452141  721676 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0906 20:20:06.452153  721676 command_runner.go:130] >       "repoTags": [
	I0906 20:20:06.452162  721676 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0906 20:20:06.452166  721676 command_runner.go:130] >       ],
	I0906 20:20:06.452175  721676 command_runner.go:130] >       "repoDigests": [
	I0906 20:20:06.452189  721676 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0906 20:20:06.452201  721676 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0906 20:20:06.452205  721676 command_runner.go:130] >       ],
	I0906 20:20:06.452211  721676 command_runner.go:130] >       "size": "520014",
	I0906 20:20:06.452221  721676 command_runner.go:130] >       "uid": {
	I0906 20:20:06.452229  721676 command_runner.go:130] >         "value": "65535"
	I0906 20:20:06.452233  721676 command_runner.go:130] >       },
	I0906 20:20:06.452238  721676 command_runner.go:130] >       "username": "",
	I0906 20:20:06.452243  721676 command_runner.go:130] >       "spec": null,
	I0906 20:20:06.452248  721676 command_runner.go:130] >       "pinned": false
	I0906 20:20:06.452252  721676 command_runner.go:130] >     }
	I0906 20:20:06.452256  721676 command_runner.go:130] >   ]
	I0906 20:20:06.452267  721676 command_runner.go:130] > }
	I0906 20:20:06.452427  721676 crio.go:496] all images are preloaded for cri-o runtime.
	I0906 20:20:06.452438  721676 cache_images.go:84] Images are preloaded, skipping loading
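The JSON above is the CRI image inventory that minikube checks before deciding whether it needs to load anything; here every required image is already present, so loading is skipped. A minimal, self-contained sketch of reading the same inventory, assuming "crictl images -o json" is available on the node and that the top-level key is "images" as in the CRI ListImages response:

	// Sketch only, not minikube's code: decode the image listing shown in the
	// log above (id, repoTags, repoDigests, size, pinned) and print tag + size.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	type criImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"` // bytes, encoded as a string
		Pinned      bool     `json:"pinned"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "crictl:", err)
			os.Exit(1)
		}
		var resp struct {
			Images []criImage `json:"images"` // assumed top-level key
		}
		if err := json.Unmarshal(out, &resp); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, img := range resp.Images {
			if len(img.RepoTags) > 0 {
				fmt.Printf("%-55s %s bytes\n", img.RepoTags[0], img.Size)
			}
		}
	}

The "size" field comes back as a string of bytes, matching the quoting seen in the log above.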
	I0906 20:20:06.452521  721676 ssh_runner.go:195] Run: crio config
	I0906 20:20:06.504146  721676 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0906 20:20:06.504173  721676 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0906 20:20:06.504182  721676 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0906 20:20:06.504187  721676 command_runner.go:130] > #
	I0906 20:20:06.504201  721676 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0906 20:20:06.504212  721676 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0906 20:20:06.504225  721676 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0906 20:20:06.504245  721676 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0906 20:20:06.504253  721676 command_runner.go:130] > # reload'.
	I0906 20:20:06.504261  721676 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0906 20:20:06.504269  721676 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0906 20:20:06.504281  721676 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0906 20:20:06.504288  721676 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0906 20:20:06.504298  721676 command_runner.go:130] > [crio]
	I0906 20:20:06.504305  721676 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0906 20:20:06.504315  721676 command_runner.go:130] > # container images, in this directory.
	I0906 20:20:06.504323  721676 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0906 20:20:06.504332  721676 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0906 20:20:06.504554  721676 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0906 20:20:06.504570  721676 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0906 20:20:06.504582  721676 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0906 20:20:06.504589  721676 command_runner.go:130] > # storage_driver = "vfs"
	I0906 20:20:06.504596  721676 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0906 20:20:06.504604  721676 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0906 20:20:06.504827  721676 command_runner.go:130] > # storage_option = [
	I0906 20:20:06.504842  721676 command_runner.go:130] > # ]
	I0906 20:20:06.504850  721676 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0906 20:20:06.504858  721676 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0906 20:20:06.504864  721676 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0906 20:20:06.504873  721676 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0906 20:20:06.504881  721676 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0906 20:20:06.504890  721676 command_runner.go:130] > # always happen on a node reboot
	I0906 20:20:06.504896  721676 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0906 20:20:06.504908  721676 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0906 20:20:06.504916  721676 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0906 20:20:06.504930  721676 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0906 20:20:06.504937  721676 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0906 20:20:06.504947  721676 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0906 20:20:06.504961  721676 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0906 20:20:06.504966  721676 command_runner.go:130] > # internal_wipe = true
	I0906 20:20:06.504978  721676 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0906 20:20:06.504989  721676 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0906 20:20:06.504997  721676 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0906 20:20:06.505007  721676 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0906 20:20:06.505021  721676 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0906 20:20:06.505029  721676 command_runner.go:130] > [crio.api]
	I0906 20:20:06.505035  721676 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0906 20:20:06.505041  721676 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0906 20:20:06.505050  721676 command_runner.go:130] > # IP address on which the stream server will listen.
	I0906 20:20:06.505056  721676 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0906 20:20:06.505064  721676 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0906 20:20:06.505074  721676 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0906 20:20:06.505080  721676 command_runner.go:130] > # stream_port = "0"
	I0906 20:20:06.505087  721676 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0906 20:20:06.505096  721676 command_runner.go:130] > # stream_enable_tls = false
	I0906 20:20:06.505103  721676 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0906 20:20:06.505113  721676 command_runner.go:130] > # stream_idle_timeout = ""
	I0906 20:20:06.505121  721676 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0906 20:20:06.505131  721676 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0906 20:20:06.505138  721676 command_runner.go:130] > # minutes.
	I0906 20:20:06.505144  721676 command_runner.go:130] > # stream_tls_cert = ""
	I0906 20:20:06.505160  721676 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0906 20:20:06.505172  721676 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0906 20:20:06.505178  721676 command_runner.go:130] > # stream_tls_key = ""
	I0906 20:20:06.505186  721676 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0906 20:20:06.505197  721676 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0906 20:20:06.505204  721676 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0906 20:20:06.505218  721676 command_runner.go:130] > # stream_tls_ca = ""
	I0906 20:20:06.505227  721676 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0906 20:20:06.505233  721676 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0906 20:20:06.505242  721676 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0906 20:20:06.505251  721676 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0906 20:20:06.505282  721676 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0906 20:20:06.505295  721676 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0906 20:20:06.505300  721676 command_runner.go:130] > [crio.runtime]
	I0906 20:20:06.505307  721676 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0906 20:20:06.505316  721676 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0906 20:20:06.505322  721676 command_runner.go:130] > # "nofile=1024:2048"
	I0906 20:20:06.505329  721676 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0906 20:20:06.505341  721676 command_runner.go:130] > # default_ulimits = [
	I0906 20:20:06.505345  721676 command_runner.go:130] > # ]
	I0906 20:20:06.505352  721676 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0906 20:20:06.505362  721676 command_runner.go:130] > # no_pivot = false
	I0906 20:20:06.505369  721676 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0906 20:20:06.505377  721676 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0906 20:20:06.505387  721676 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0906 20:20:06.505394  721676 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0906 20:20:06.505401  721676 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0906 20:20:06.505411  721676 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0906 20:20:06.505416  721676 command_runner.go:130] > # conmon = ""
	I0906 20:20:06.505422  721676 command_runner.go:130] > # Cgroup setting for conmon
	I0906 20:20:06.505432  721676 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0906 20:20:06.505662  721676 command_runner.go:130] > conmon_cgroup = "pod"
	I0906 20:20:06.505679  721676 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0906 20:20:06.505686  721676 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0906 20:20:06.505695  721676 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0906 20:20:06.505703  721676 command_runner.go:130] > # conmon_env = [
	I0906 20:20:06.505708  721676 command_runner.go:130] > # ]
	I0906 20:20:06.505715  721676 command_runner.go:130] > # Additional environment variables to set for all the
	I0906 20:20:06.505725  721676 command_runner.go:130] > # containers. These are overridden if set in the
	I0906 20:20:06.505733  721676 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0906 20:20:06.505741  721676 command_runner.go:130] > # default_env = [
	I0906 20:20:06.505745  721676 command_runner.go:130] > # ]
	I0906 20:20:06.505752  721676 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0906 20:20:06.505762  721676 command_runner.go:130] > # selinux = false
	I0906 20:20:06.505770  721676 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0906 20:20:06.505777  721676 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0906 20:20:06.505784  721676 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0906 20:20:06.505794  721676 command_runner.go:130] > # seccomp_profile = ""
	I0906 20:20:06.505801  721676 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0906 20:20:06.505811  721676 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0906 20:20:06.505822  721676 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0906 20:20:06.505828  721676 command_runner.go:130] > # which might increase security.
	I0906 20:20:06.505839  721676 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0906 20:20:06.505849  721676 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0906 20:20:06.505861  721676 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0906 20:20:06.505869  721676 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0906 20:20:06.505878  721676 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0906 20:20:06.505888  721676 command_runner.go:130] > # This option supports live configuration reload.
	I0906 20:20:06.505902  721676 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0906 20:20:06.505918  721676 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0906 20:20:06.505928  721676 command_runner.go:130] > # the cgroup blockio controller.
	I0906 20:20:06.505934  721676 command_runner.go:130] > # blockio_config_file = ""
	I0906 20:20:06.505942  721676 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0906 20:20:06.505952  721676 command_runner.go:130] > # irqbalance daemon.
	I0906 20:20:06.505960  721676 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0906 20:20:06.505968  721676 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0906 20:20:06.505977  721676 command_runner.go:130] > # This option supports live configuration reload.
	I0906 20:20:06.505983  721676 command_runner.go:130] > # rdt_config_file = ""
	I0906 20:20:06.505990  721676 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0906 20:20:06.506000  721676 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0906 20:20:06.506007  721676 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0906 20:20:06.506012  721676 command_runner.go:130] > # separate_pull_cgroup = ""
	I0906 20:20:06.506024  721676 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0906 20:20:06.506032  721676 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0906 20:20:06.506040  721676 command_runner.go:130] > # will be added.
	I0906 20:20:06.506062  721676 command_runner.go:130] > # default_capabilities = [
	I0906 20:20:06.506301  721676 command_runner.go:130] > # 	"CHOWN",
	I0906 20:20:06.506314  721676 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0906 20:20:06.506320  721676 command_runner.go:130] > # 	"FSETID",
	I0906 20:20:06.506324  721676 command_runner.go:130] > # 	"FOWNER",
	I0906 20:20:06.506329  721676 command_runner.go:130] > # 	"SETGID",
	I0906 20:20:06.506335  721676 command_runner.go:130] > # 	"SETUID",
	I0906 20:20:06.506343  721676 command_runner.go:130] > # 	"SETPCAP",
	I0906 20:20:06.506348  721676 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0906 20:20:06.506353  721676 command_runner.go:130] > # 	"KILL",
	I0906 20:20:06.506362  721676 command_runner.go:130] > # ]
	I0906 20:20:06.506372  721676 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0906 20:20:06.506384  721676 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0906 20:20:06.506391  721676 command_runner.go:130] > # add_inheritable_capabilities = true
	I0906 20:20:06.506405  721676 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0906 20:20:06.506412  721676 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0906 20:20:06.506417  721676 command_runner.go:130] > # default_sysctls = [
	I0906 20:20:06.506422  721676 command_runner.go:130] > # ]
	I0906 20:20:06.506428  721676 command_runner.go:130] > # List of devices on the host that a
	I0906 20:20:06.506436  721676 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0906 20:20:06.506442  721676 command_runner.go:130] > # allowed_devices = [
	I0906 20:20:06.506449  721676 command_runner.go:130] > # 	"/dev/fuse",
	I0906 20:20:06.506458  721676 command_runner.go:130] > # ]
	I0906 20:20:06.506464  721676 command_runner.go:130] > # List of additional devices, specified as
	I0906 20:20:06.506487  721676 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0906 20:20:06.506498  721676 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0906 20:20:06.506506  721676 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0906 20:20:06.506513  721676 command_runner.go:130] > # additional_devices = [
	I0906 20:20:06.506518  721676 command_runner.go:130] > # ]
	I0906 20:20:06.506525  721676 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0906 20:20:06.506533  721676 command_runner.go:130] > # cdi_spec_dirs = [
	I0906 20:20:06.506538  721676 command_runner.go:130] > # 	"/etc/cdi",
	I0906 20:20:06.506543  721676 command_runner.go:130] > # 	"/var/run/cdi",
	I0906 20:20:06.506551  721676 command_runner.go:130] > # ]
	I0906 20:20:06.506560  721676 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0906 20:20:06.506572  721676 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0906 20:20:06.506577  721676 command_runner.go:130] > # Defaults to false.
	I0906 20:20:06.506583  721676 command_runner.go:130] > # device_ownership_from_security_context = false
	I0906 20:20:06.506591  721676 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0906 20:20:06.506599  721676 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0906 20:20:06.506608  721676 command_runner.go:130] > # hooks_dir = [
	I0906 20:20:06.506614  721676 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0906 20:20:06.506624  721676 command_runner.go:130] > # ]
	I0906 20:20:06.506632  721676 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0906 20:20:06.506647  721676 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0906 20:20:06.506653  721676 command_runner.go:130] > # its default mounts from the following two files:
	I0906 20:20:06.506661  721676 command_runner.go:130] > #
	I0906 20:20:06.506669  721676 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0906 20:20:06.506677  721676 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0906 20:20:06.506684  721676 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0906 20:20:06.506688  721676 command_runner.go:130] > #
	I0906 20:20:06.506695  721676 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0906 20:20:06.506706  721676 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0906 20:20:06.506714  721676 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0906 20:20:06.506724  721676 command_runner.go:130] > #      only add mounts it finds in this file.
	I0906 20:20:06.506729  721676 command_runner.go:130] > #
	I0906 20:20:06.506734  721676 command_runner.go:130] > # default_mounts_file = ""
	I0906 20:20:06.506746  721676 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0906 20:20:06.506754  721676 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0906 20:20:06.506759  721676 command_runner.go:130] > # pids_limit = 0
	I0906 20:20:06.506767  721676 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0906 20:20:06.506775  721676 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0906 20:20:06.506783  721676 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0906 20:20:06.506796  721676 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0906 20:20:06.506801  721676 command_runner.go:130] > # log_size_max = -1
	I0906 20:20:06.506815  721676 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0906 20:20:06.506822  721676 command_runner.go:130] > # log_to_journald = false
	I0906 20:20:06.506830  721676 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0906 20:20:06.507083  721676 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0906 20:20:06.507098  721676 command_runner.go:130] > # Path to directory for container attach sockets.
	I0906 20:20:06.507105  721676 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0906 20:20:06.507111  721676 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0906 20:20:06.507117  721676 command_runner.go:130] > # bind_mount_prefix = ""
	I0906 20:20:06.507123  721676 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0906 20:20:06.507129  721676 command_runner.go:130] > # read_only = false
	I0906 20:20:06.507142  721676 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0906 20:20:06.507150  721676 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0906 20:20:06.507159  721676 command_runner.go:130] > # live configuration reload.
	I0906 20:20:06.507164  721676 command_runner.go:130] > # log_level = "info"
	I0906 20:20:06.507171  721676 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0906 20:20:06.507181  721676 command_runner.go:130] > # This option supports live configuration reload.
	I0906 20:20:06.507186  721676 command_runner.go:130] > # log_filter = ""
	I0906 20:20:06.507195  721676 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0906 20:20:06.507203  721676 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0906 20:20:06.507208  721676 command_runner.go:130] > # separated by comma.
	I0906 20:20:06.507213  721676 command_runner.go:130] > # uid_mappings = ""
	I0906 20:20:06.507225  721676 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0906 20:20:06.507232  721676 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0906 20:20:06.507241  721676 command_runner.go:130] > # separated by comma.
	I0906 20:20:06.507246  721676 command_runner.go:130] > # gid_mappings = ""
	I0906 20:20:06.507268  721676 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0906 20:20:06.507279  721676 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0906 20:20:06.507287  721676 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0906 20:20:06.507292  721676 command_runner.go:130] > # minimum_mappable_uid = -1
	I0906 20:20:06.507300  721676 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0906 20:20:06.507312  721676 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0906 20:20:06.507319  721676 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0906 20:20:06.507328  721676 command_runner.go:130] > # minimum_mappable_gid = -1
	I0906 20:20:06.507336  721676 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0906 20:20:06.507346  721676 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0906 20:20:06.507353  721676 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0906 20:20:06.507358  721676 command_runner.go:130] > # ctr_stop_timeout = 30
	I0906 20:20:06.507365  721676 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0906 20:20:06.507372  721676 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0906 20:20:06.507385  721676 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0906 20:20:06.507391  721676 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0906 20:20:06.507400  721676 command_runner.go:130] > # drop_infra_ctr = true
	I0906 20:20:06.507407  721676 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0906 20:20:06.507414  721676 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0906 20:20:06.507427  721676 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0906 20:20:06.507433  721676 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0906 20:20:06.507440  721676 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0906 20:20:06.507447  721676 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0906 20:20:06.507452  721676 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0906 20:20:06.507461  721676 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0906 20:20:06.507469  721676 command_runner.go:130] > # pinns_path = ""
	I0906 20:20:06.507477  721676 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0906 20:20:06.507490  721676 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0906 20:20:06.507498  721676 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0906 20:20:06.507507  721676 command_runner.go:130] > # default_runtime = "runc"
	I0906 20:20:06.507513  721676 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0906 20:20:06.507522  721676 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0906 20:20:06.507533  721676 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0906 20:20:06.507544  721676 command_runner.go:130] > # creation as a file is not desired either.
	I0906 20:20:06.507554  721676 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0906 20:20:06.507563  721676 command_runner.go:130] > # the hostname is being managed dynamically.
	I0906 20:20:06.507569  721676 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0906 20:20:06.507578  721676 command_runner.go:130] > # ]
	I0906 20:20:06.507586  721676 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0906 20:20:06.507596  721676 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0906 20:20:06.507604  721676 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0906 20:20:06.507611  721676 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0906 20:20:06.507615  721676 command_runner.go:130] > #
	I0906 20:20:06.507626  721676 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0906 20:20:06.507632  721676 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0906 20:20:06.507643  721676 command_runner.go:130] > #  runtime_type = "oci"
	I0906 20:20:06.507649  721676 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0906 20:20:06.507656  721676 command_runner.go:130] > #  privileged_without_host_devices = false
	I0906 20:20:06.507664  721676 command_runner.go:130] > #  allowed_annotations = []
	I0906 20:20:06.507669  721676 command_runner.go:130] > # Where:
	I0906 20:20:06.507675  721676 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0906 20:20:06.507685  721676 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0906 20:20:06.507708  721676 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0906 20:20:06.507721  721676 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0906 20:20:06.507726  721676 command_runner.go:130] > #   in $PATH.
	I0906 20:20:06.507740  721676 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0906 20:20:06.507747  721676 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0906 20:20:06.507755  721676 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0906 20:20:06.507759  721676 command_runner.go:130] > #   state.
	I0906 20:20:06.507767  721676 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0906 20:20:06.507775  721676 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0906 20:20:06.507787  721676 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0906 20:20:06.507794  721676 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0906 20:20:06.507805  721676 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0906 20:20:06.507813  721676 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0906 20:20:06.507824  721676 command_runner.go:130] > #   The currently recognized values are:
	I0906 20:20:06.507832  721676 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0906 20:20:06.507841  721676 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0906 20:20:06.507848  721676 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0906 20:20:06.507856  721676 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0906 20:20:06.507868  721676 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0906 20:20:06.507876  721676 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0906 20:20:06.507887  721676 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0906 20:20:06.507896  721676 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0906 20:20:06.507906  721676 command_runner.go:130] > #   should be moved to the container's cgroup
	I0906 20:20:06.507911  721676 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0906 20:20:06.507917  721676 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0906 20:20:06.508203  721676 command_runner.go:130] > runtime_type = "oci"
	I0906 20:20:06.508233  721676 command_runner.go:130] > runtime_root = "/run/runc"
	I0906 20:20:06.508238  721676 command_runner.go:130] > runtime_config_path = ""
	I0906 20:20:06.508243  721676 command_runner.go:130] > monitor_path = ""
	I0906 20:20:06.508248  721676 command_runner.go:130] > monitor_cgroup = ""
	I0906 20:20:06.508253  721676 command_runner.go:130] > monitor_exec_cgroup = ""
	I0906 20:20:06.508291  721676 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0906 20:20:06.508304  721676 command_runner.go:130] > # running containers
	I0906 20:20:06.508309  721676 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0906 20:20:06.508317  721676 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0906 20:20:06.508331  721676 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0906 20:20:06.508338  721676 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0906 20:20:06.508344  721676 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0906 20:20:06.508350  721676 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0906 20:20:06.508355  721676 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0906 20:20:06.508361  721676 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0906 20:20:06.508374  721676 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0906 20:20:06.508380  721676 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0906 20:20:06.508388  721676 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0906 20:20:06.508399  721676 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0906 20:20:06.508406  721676 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0906 20:20:06.508418  721676 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0906 20:20:06.508428  721676 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0906 20:20:06.508435  721676 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0906 20:20:06.508449  721676 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0906 20:20:06.508461  721676 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0906 20:20:06.508468  721676 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0906 20:20:06.508479  721676 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0906 20:20:06.508487  721676 command_runner.go:130] > # Example:
	I0906 20:20:06.508497  721676 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0906 20:20:06.508503  721676 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0906 20:20:06.508509  721676 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0906 20:20:06.508516  721676 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0906 20:20:06.508520  721676 command_runner.go:130] > # cpuset = 0
	I0906 20:20:06.508525  721676 command_runner.go:130] > # cpushares = "0-1"
	I0906 20:20:06.508531  721676 command_runner.go:130] > # Where:
	I0906 20:20:06.508536  721676 command_runner.go:130] > # The workload name is workload-type.
	I0906 20:20:06.508547  721676 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0906 20:20:06.508554  721676 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0906 20:20:06.508561  721676 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0906 20:20:06.508574  721676 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0906 20:20:06.508581  721676 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0906 20:20:06.508585  721676 command_runner.go:130] > # 
	I0906 20:20:06.508670  721676 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0906 20:20:06.508683  721676 command_runner.go:130] > #
	I0906 20:20:06.508695  721676 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0906 20:20:06.508719  721676 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0906 20:20:06.508735  721676 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0906 20:20:06.508743  721676 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0906 20:20:06.508753  721676 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0906 20:20:06.508759  721676 command_runner.go:130] > [crio.image]
	I0906 20:20:06.508766  721676 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0906 20:20:06.508771  721676 command_runner.go:130] > # default_transport = "docker://"
	I0906 20:20:06.508781  721676 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0906 20:20:06.508792  721676 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0906 20:20:06.508797  721676 command_runner.go:130] > # global_auth_file = ""
	I0906 20:20:06.508803  721676 command_runner.go:130] > # The image used to instantiate infra containers.
	I0906 20:20:06.508815  721676 command_runner.go:130] > # This option supports live configuration reload.
	I0906 20:20:06.508821  721676 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0906 20:20:06.508831  721676 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0906 20:20:06.508839  721676 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0906 20:20:06.508845  721676 command_runner.go:130] > # This option supports live configuration reload.
	I0906 20:20:06.508851  721676 command_runner.go:130] > # pause_image_auth_file = ""
	I0906 20:20:06.508858  721676 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0906 20:20:06.508868  721676 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0906 20:20:06.508876  721676 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0906 20:20:06.508885  721676 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0906 20:20:06.508894  721676 command_runner.go:130] > # pause_command = "/pause"
	I0906 20:20:06.508901  721676 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0906 20:20:06.508911  721676 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0906 20:20:06.508919  721676 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0906 20:20:06.508926  721676 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0906 20:20:06.508933  721676 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0906 20:20:06.508939  721676 command_runner.go:130] > # signature_policy = ""
	I0906 20:20:06.508948  721676 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0906 20:20:06.508958  721676 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0906 20:20:06.508966  721676 command_runner.go:130] > # changing them here.
	I0906 20:20:06.508971  721676 command_runner.go:130] > # insecure_registries = [
	I0906 20:20:06.508975  721676 command_runner.go:130] > # ]
	I0906 20:20:06.508983  721676 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0906 20:20:06.508992  721676 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0906 20:20:06.509001  721676 command_runner.go:130] > # image_volumes = "mkdir"
	I0906 20:20:06.509010  721676 command_runner.go:130] > # Temporary directory to use for storing big files
	I0906 20:20:06.509015  721676 command_runner.go:130] > # big_files_temporary_dir = ""
	I0906 20:20:06.509023  721676 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0906 20:20:06.509030  721676 command_runner.go:130] > # CNI plugins.
	I0906 20:20:06.509035  721676 command_runner.go:130] > [crio.network]
	I0906 20:20:06.509042  721676 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0906 20:20:06.509050  721676 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0906 20:20:06.509100  721676 command_runner.go:130] > # cni_default_network = ""
	I0906 20:20:06.509108  721676 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0906 20:20:06.509114  721676 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0906 20:20:06.509120  721676 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0906 20:20:06.509125  721676 command_runner.go:130] > # plugin_dirs = [
	I0906 20:20:06.509132  721676 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0906 20:20:06.509136  721676 command_runner.go:130] > # ]
	I0906 20:20:06.509152  721676 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0906 20:20:06.509159  721676 command_runner.go:130] > [crio.metrics]
	I0906 20:20:06.509195  721676 command_runner.go:130] > # Globally enable or disable metrics support.
	I0906 20:20:06.509539  721676 command_runner.go:130] > # enable_metrics = false
	I0906 20:20:06.509564  721676 command_runner.go:130] > # Specify enabled metrics collectors.
	I0906 20:20:06.509571  721676 command_runner.go:130] > # Per default all metrics are enabled.
	I0906 20:20:06.509579  721676 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0906 20:20:06.509605  721676 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0906 20:20:06.509614  721676 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0906 20:20:06.509619  721676 command_runner.go:130] > # metrics_collectors = [
	I0906 20:20:06.509623  721676 command_runner.go:130] > # 	"operations",
	I0906 20:20:06.509632  721676 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0906 20:20:06.509638  721676 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0906 20:20:06.509645  721676 command_runner.go:130] > # 	"operations_errors",
	I0906 20:20:06.509653  721676 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0906 20:20:06.509658  721676 command_runner.go:130] > # 	"image_pulls_by_name",
	I0906 20:20:06.509666  721676 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0906 20:20:06.509672  721676 command_runner.go:130] > # 	"image_pulls_failures",
	I0906 20:20:06.509679  721676 command_runner.go:130] > # 	"image_pulls_successes",
	I0906 20:20:06.509684  721676 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0906 20:20:06.509689  721676 command_runner.go:130] > # 	"image_layer_reuse",
	I0906 20:20:06.509694  721676 command_runner.go:130] > # 	"containers_oom_total",
	I0906 20:20:06.509703  721676 command_runner.go:130] > # 	"containers_oom",
	I0906 20:20:06.509710  721676 command_runner.go:130] > # 	"processes_defunct",
	I0906 20:20:06.509716  721676 command_runner.go:130] > # 	"operations_total",
	I0906 20:20:06.509730  721676 command_runner.go:130] > # 	"operations_latency_seconds",
	I0906 20:20:06.509736  721676 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0906 20:20:06.509741  721676 command_runner.go:130] > # 	"operations_errors_total",
	I0906 20:20:06.509749  721676 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0906 20:20:06.509755  721676 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0906 20:20:06.509762  721676 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0906 20:20:06.509767  721676 command_runner.go:130] > # 	"image_pulls_success_total",
	I0906 20:20:06.509772  721676 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0906 20:20:06.509778  721676 command_runner.go:130] > # 	"containers_oom_count_total",
	I0906 20:20:06.509794  721676 command_runner.go:130] > # ]
	I0906 20:20:06.509805  721676 command_runner.go:130] > # The port on which the metrics server will listen.
	I0906 20:20:06.509811  721676 command_runner.go:130] > # metrics_port = 9090
	I0906 20:20:06.509820  721676 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0906 20:20:06.509825  721676 command_runner.go:130] > # metrics_socket = ""
	I0906 20:20:06.509831  721676 command_runner.go:130] > # The certificate for the secure metrics server.
	I0906 20:20:06.509843  721676 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0906 20:20:06.509851  721676 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0906 20:20:06.509857  721676 command_runner.go:130] > # certificate on any modification event.
	I0906 20:20:06.509861  721676 command_runner.go:130] > # metrics_cert = ""
	I0906 20:20:06.509867  721676 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0906 20:20:06.509875  721676 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0906 20:20:06.509880  721676 command_runner.go:130] > # metrics_key = ""
	I0906 20:20:06.509897  721676 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0906 20:20:06.509902  721676 command_runner.go:130] > [crio.tracing]
	I0906 20:20:06.509911  721676 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0906 20:20:06.509916  721676 command_runner.go:130] > # enable_tracing = false
	I0906 20:20:06.509923  721676 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0906 20:20:06.509928  721676 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0906 20:20:06.509935  721676 command_runner.go:130] > # Number of samples to collect per million spans.
	I0906 20:20:06.509940  721676 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0906 20:20:06.509950  721676 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0906 20:20:06.509955  721676 command_runner.go:130] > [crio.stats]
	I0906 20:20:06.509961  721676 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0906 20:20:06.509973  721676 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0906 20:20:06.510186  721676 command_runner.go:130] > # stats_collection_period = 0
	I0906 20:20:06.511968  721676 command_runner.go:130] ! time="2023-09-06 20:20:06.501665882Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0906 20:20:06.511993  721676 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
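The block above is the full "crio config" dump. The handful of uncommented keys (conmon_cgroup = "pod", cgroup_manager = "cgroupfs", the [crio.runtime.runtimes.runc] stanza and pause_image = "registry.k8s.io/pause:3.9") are the values minikube overrides; everything else is a commented-out default. A small sketch for reading those effective values back out, assuming the third-party github.com/BurntSushi/toml package:

	// Sketch only: re-run "crio config" and pull out the few values overridden
	// in the dump above: cgroup_manager, conmon_cgroup and pause_image.
	package main

	import (
		"fmt"
		"os"
		"os/exec"

		"github.com/BurntSushi/toml"
	)

	func main() {
		out, err := exec.Command("sudo", "crio", "config").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "crio config:", err)
			os.Exit(1)
		}
		var cfg struct {
			Crio struct {
				Runtime struct {
					CgroupManager string `toml:"cgroup_manager"`
					ConmonCgroup  string `toml:"conmon_cgroup"`
				} `toml:"runtime"`
				Image struct {
					PauseImage string `toml:"pause_image"`
				} `toml:"image"`
			} `toml:"crio"`
		}
		if _, err := toml.Decode(string(out), &cfg); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		fmt.Println("cgroup_manager:", cfg.Crio.Runtime.CgroupManager)
		fmt.Println("conmon_cgroup: ", cfg.Crio.Runtime.ConmonCgroup)
		fmt.Println("pause_image:   ", cfg.Crio.Image.PauseImage)
	}

Because "crio config" writes its informational time=... lines to stderr (the two lines flagged with "!" above), capturing stdout alone yields clean TOML.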
	I0906 20:20:06.512070  721676 cni.go:84] Creating CNI manager for ""
	I0906 20:20:06.512083  721676 cni.go:136] 1 nodes found, recommending kindnet
	I0906 20:20:06.512113  721676 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 20:20:06.512136  721676 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-782472 NodeName:multinode-782472 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 20:20:06.512286  721676 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-782472"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
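	The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) are rendered as one multi-document YAML file, which the log later copies to /var/tmp/minikube/kubeadm.yaml.new. A sketch that enumerates the documents, assuming the gopkg.in/yaml.v3 package:

	// Sketch only: walk the multi-document kubeadm manifest written by minikube
	// and print each document's apiVersion and kind.
	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break
				}
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
			fmt.Printf("%-40s %s\n", doc.APIVersion, doc.Kind)
		}
	}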
	
	I0906 20:20:06.512360  721676 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-782472 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-782472 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
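The [Service] drop-in above uses the standard systemd override pattern: the empty ExecStart= clears the unit's original command before the fully-flagged kubelet invocation is set. A hedged sketch of rendering a drop-in of that shape (flag names and values copied from the log; this is not minikube's actual template code):

	// Sketch only: build a kubelet systemd drop-in in the same shape as the one
	// logged above. The blank ExecStart= resets the unit before the override.
	package main

	import (
		"fmt"
		"sort"
		"strings"
	)

	func renderDropIn(kubeletPath string, flags map[string]string) string {
		keys := make([]string, 0, len(flags))
		for k := range flags {
			keys = append(keys, k)
		}
		sort.Strings(keys) // the logged flags happen to be in alphabetical order too

		parts := make([]string, 0, len(keys))
		for _, k := range keys {
			parts = append(parts, fmt.Sprintf("--%s=%s", k, flags[k]))
		}
		return "[Service]\nExecStart=\nExecStart=" + kubeletPath + " " + strings.Join(parts, " ") + "\n"
	}

	func main() {
		fmt.Print(renderDropIn("/var/lib/minikube/binaries/v1.28.1/kubelet", map[string]string{
			"container-runtime-endpoint": "unix:///var/run/crio/crio.sock",
			"hostname-override":          "multinode-782472",
			"kubeconfig":                 "/etc/kubernetes/kubelet.conf",
			"node-ip":                    "192.168.58.2",
		}))
	}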
	I0906 20:20:06.512430  721676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0906 20:20:06.522590  721676 command_runner.go:130] > kubeadm
	I0906 20:20:06.522609  721676 command_runner.go:130] > kubectl
	I0906 20:20:06.522614  721676 command_runner.go:130] > kubelet
	I0906 20:20:06.523914  721676 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:20:06.523991  721676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:20:06.535323  721676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0906 20:20:06.558512  721676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:20:06.580899  721676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0906 20:20:06.602582  721676 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0906 20:20:06.607078  721676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
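The bash one-liner above makes the control-plane.minikube.internal mapping in /etc/hosts idempotent: grep -v drops any stale entry, the fresh 192.168.58.2 line is appended, and the result is staged in /tmp before sudo cp moves it into place. The same logic sketched in Go (writing the file directly instead of staging through /tmp, so it would have to run as root):

	// Sketch only: replicate the logged /etc/hosts edit in Go.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const hostsPath = "/etc/hosts"
		const entry = "192.168.58.2\tcontrol-plane.minikube.internal"

		data, err := os.ReadFile(hostsPath)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Equivalent of the grep -v in the logged command.
			if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}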
	I0906 20:20:06.620707  721676 certs.go:56] Setting up /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472 for IP: 192.168.58.2
	I0906 20:20:06.620739  721676 certs.go:190] acquiring lock for shared ca certs: {Name:mk5596cf7beb26b5b83b50e551aa70cf266830a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:20:06.620918  721676 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.key
	I0906 20:20:06.620965  721676 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.key
	I0906 20:20:06.621018  721676 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/client.key
	I0906 20:20:06.621034  721676 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/client.crt with IP's: []
	I0906 20:20:07.027048  721676 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/client.crt ...
	I0906 20:20:07.027079  721676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/client.crt: {Name:mkc0aa1c4bb418d37a7abd2fdb13971ab6b9411b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:20:07.027287  721676 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/client.key ...
	I0906 20:20:07.027300  721676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/client.key: {Name:mk6c95d3d826bbff630c9b0d3320f2c29e9633b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:20:07.027392  721676 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/apiserver.key.cee25041
	I0906 20:20:07.027407  721676 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0906 20:20:07.483904  721676 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/apiserver.crt.cee25041 ...
	I0906 20:20:07.483939  721676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/apiserver.crt.cee25041: {Name:mk209914f9f6be047aded894706b56907834bcf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:20:07.484139  721676 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/apiserver.key.cee25041 ...
	I0906 20:20:07.484152  721676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/apiserver.key.cee25041: {Name:mk37c5c0afaad1d648adf348a0a68edca8aba406 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:20:07.484239  721676 certs.go:337] copying /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/apiserver.crt
	I0906 20:20:07.484313  721676 certs.go:341] copying /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/apiserver.key
	I0906 20:20:07.484372  721676 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/proxy-client.key
	I0906 20:20:07.484384  721676 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/proxy-client.crt with IP's: []
	I0906 20:20:07.740456  721676 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/proxy-client.crt ...
	I0906 20:20:07.740485  721676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/proxy-client.crt: {Name:mkd6be519ad55a719c1e9e830cb0be6c5f3bb84e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:20:07.740671  721676 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/proxy-client.key ...
	I0906 20:20:07.740685  721676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/proxy-client.key: {Name:mkc9e612f7227d40447368e155645f28142021e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
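crypto.go is minting the apiserver serving certificate with IP SANs for the node IP, the first service-CIDR address, and the loopback addresses. A compact sketch of issuing such a certificate with Go's standard crypto/x509 (self-signed here for brevity; the real certificate is signed by the minikube CA):

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Key pair for the serving certificate.
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}

    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The same IP SANs the log shows for apiserver.crt.
    		IPAddresses: []net.IP{
    			net.ParseIP("192.168.58.2"),
    			net.ParseIP("10.96.0.1"),
    			net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"),
    		},
    	}

    	// Self-signed for the sketch; a CA-signed cert would pass the CA
    	// certificate and CA key as the parent instead of tmpl/key twice.
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
    		panic(err)
    	}
    }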
	I0906 20:20:07.740765  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 20:20:07.740786  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 20:20:07.740798  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 20:20:07.740809  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 20:20:07.740824  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 20:20:07.740835  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 20:20:07.740850  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 20:20:07.740863  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 20:20:07.740924  721676 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/657900.pem (1338 bytes)
	W0906 20:20:07.740963  721676 certs.go:433] ignoring /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/657900_empty.pem, impossibly tiny 0 bytes
	I0906 20:20:07.740978  721676 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:20:07.741006  721676 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem (1082 bytes)
	I0906 20:20:07.741034  721676 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:20:07.741067  721676 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem (1679 bytes)
	I0906 20:20:07.741117  721676 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem (1708 bytes)
	I0906 20:20:07.741147  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem -> /usr/share/ca-certificates/6579002.pem
	I0906 20:20:07.741181  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:20:07.741195  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/657900.pem -> /usr/share/ca-certificates/657900.pem
	I0906 20:20:07.741897  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 20:20:07.771707  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 20:20:07.803815  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:20:07.833787  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 20:20:07.863080  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:20:07.892810  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 20:20:07.921782  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:20:07.950755  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:20:07.979712  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem --> /usr/share/ca-certificates/6579002.pem (1708 bytes)
	I0906 20:20:08.015900  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:20:08.046907  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/certs/657900.pem --> /usr/share/ca-certificates/657900.pem (1338 bytes)
	I0906 20:20:08.078119  721676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:20:08.099953  721676 ssh_runner.go:195] Run: openssl version
	I0906 20:20:08.106857  721676 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0906 20:20:08.107255  721676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6579002.pem && ln -fs /usr/share/ca-certificates/6579002.pem /etc/ssl/certs/6579002.pem"
	I0906 20:20:08.119454  721676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6579002.pem
	I0906 20:20:08.124087  721676 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep  6 20:04 /usr/share/ca-certificates/6579002.pem
	I0906 20:20:08.124120  721676 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 20:04 /usr/share/ca-certificates/6579002.pem
	I0906 20:20:08.124177  721676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6579002.pem
	I0906 20:20:08.132741  721676 command_runner.go:130] > 3ec20f2e
	I0906 20:20:08.133119  721676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6579002.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:20:08.145557  721676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:20:08.157719  721676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:20:08.163278  721676 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep  6 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:20:08.163315  721676 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:20:08.163369  721676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:20:08.172969  721676 command_runner.go:130] > b5213941
	I0906 20:20:08.173343  721676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:20:08.185477  721676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/657900.pem && ln -fs /usr/share/ca-certificates/657900.pem /etc/ssl/certs/657900.pem"
	I0906 20:20:08.197337  721676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/657900.pem
	I0906 20:20:08.202194  721676 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep  6 20:04 /usr/share/ca-certificates/657900.pem
	I0906 20:20:08.202242  721676 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 20:04 /usr/share/ca-certificates/657900.pem
	I0906 20:20:08.202302  721676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/657900.pem
	I0906 20:20:08.210808  721676 command_runner.go:130] > 51391683
	I0906 20:20:08.211214  721676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/657900.pem /etc/ssl/certs/51391683.0"
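Each CA bundle under /usr/share/ca-certificates is registered with OpenSSL by symlinking it as <subject-hash>.0 inside /etc/ssl/certs, which is exactly what the openssl x509 -hash plus ln -fs pairs above do. A small Go sketch that shells out to openssl for the hash and creates the link (linkCACert is a hypothetical helper shown only to make the two commands explicit):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCACert computes the OpenSSL subject hash of certPath and symlinks
    // the certificate as <hash>.0 inside certsDir, mirroring the commands
    // in the log above.
    func linkCACert(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // replace any stale link, like ln -fs
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }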
	I0906 20:20:08.223143  721676 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0906 20:20:08.228068  721676 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0906 20:20:08.228123  721676 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0906 20:20:08.228165  721676 kubeadm.go:404] StartCluster: {Name:multinode-782472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-782472 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 20:20:08.228261  721676 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:20:08.228325  721676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:20:08.271482  721676 cri.go:89] found id: ""
	I0906 20:20:08.271550  721676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:20:08.282450  721676 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0906 20:20:08.282474  721676 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0906 20:20:08.282483  721676 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0906 20:20:08.282571  721676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:20:08.293699  721676 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0906 20:20:08.293773  721676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:20:08.303813  721676 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0906 20:20:08.303837  721676 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0906 20:20:08.303847  721676 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0906 20:20:08.305230  721676 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:20:08.305277  721676 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:20:08.305312  721676 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 20:20:08.413195  721676 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-aws\n", err: exit status 1
	I0906 20:20:08.413264  721676 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-aws\n", err: exit status 1
	I0906 20:20:08.501966  721676 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:20:08.501998  721676 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:20:24.246455  721676 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0906 20:20:24.246487  721676 command_runner.go:130] > [init] Using Kubernetes version: v1.28.1
	I0906 20:20:24.246527  721676 kubeadm.go:322] [preflight] Running pre-flight checks
	I0906 20:20:24.246537  721676 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 20:20:24.246618  721676 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0906 20:20:24.246628  721676 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0906 20:20:24.246678  721676 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1044-aws
	I0906 20:20:24.246690  721676 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1044-aws
	I0906 20:20:24.246722  721676 kubeadm.go:322] OS: Linux
	I0906 20:20:24.246731  721676 command_runner.go:130] > OS: Linux
	I0906 20:20:24.246773  721676 kubeadm.go:322] CGROUPS_CPU: enabled
	I0906 20:20:24.246782  721676 command_runner.go:130] > CGROUPS_CPU: enabled
	I0906 20:20:24.246835  721676 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0906 20:20:24.246845  721676 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0906 20:20:24.246889  721676 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0906 20:20:24.246898  721676 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0906 20:20:24.246943  721676 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0906 20:20:24.246951  721676 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0906 20:20:24.246996  721676 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0906 20:20:24.247005  721676 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0906 20:20:24.247049  721676 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0906 20:20:24.247058  721676 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0906 20:20:24.247100  721676 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0906 20:20:24.247108  721676 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0906 20:20:24.247152  721676 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0906 20:20:24.247165  721676 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0906 20:20:24.247209  721676 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0906 20:20:24.247218  721676 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0906 20:20:24.247289  721676 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:20:24.247298  721676 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:20:24.247385  721676 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:20:24.247392  721676 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:20:24.247477  721676 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 20:20:24.247485  721676 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 20:20:24.247542  721676 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:20:24.250586  721676 out.go:204]   - Generating certificates and keys ...
	I0906 20:20:24.247673  721676 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:20:24.250678  721676 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0906 20:20:24.250690  721676 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0906 20:20:24.250747  721676 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0906 20:20:24.250752  721676 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0906 20:20:24.250813  721676 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 20:20:24.250819  721676 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 20:20:24.250871  721676 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0906 20:20:24.250876  721676 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0906 20:20:24.250931  721676 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0906 20:20:24.250935  721676 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0906 20:20:24.250981  721676 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0906 20:20:24.250987  721676 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0906 20:20:24.251036  721676 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0906 20:20:24.251042  721676 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0906 20:20:24.251153  721676 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-782472] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0906 20:20:24.251158  721676 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-782472] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0906 20:20:24.251206  721676 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0906 20:20:24.251213  721676 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0906 20:20:24.251322  721676 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-782472] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0906 20:20:24.251327  721676 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-782472] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0906 20:20:24.251387  721676 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 20:20:24.251391  721676 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 20:20:24.251449  721676 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 20:20:24.251454  721676 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 20:20:24.251495  721676 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0906 20:20:24.251500  721676 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0906 20:20:24.251551  721676 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:20:24.251556  721676 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:20:24.251603  721676 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:20:24.251607  721676 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:20:24.251656  721676 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:20:24.251661  721676 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:20:24.251719  721676 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:20:24.251724  721676 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:20:24.251774  721676 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:20:24.251783  721676 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:20:24.251858  721676 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:20:24.251863  721676 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:20:24.251924  721676 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:20:24.253947  721676 out.go:204]   - Booting up control plane ...
	I0906 20:20:24.252029  721676 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:20:24.254192  721676 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:20:24.254229  721676 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:20:24.254339  721676 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:20:24.254348  721676 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:20:24.254415  721676 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:20:24.254419  721676 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:20:24.254523  721676 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:20:24.254527  721676 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:20:24.254611  721676 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:20:24.254616  721676 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:20:24.254656  721676 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0906 20:20:24.254659  721676 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0906 20:20:24.254813  721676 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 20:20:24.254817  721676 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 20:20:24.254893  721676 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.502298 seconds
	I0906 20:20:24.254897  721676 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502298 seconds
	I0906 20:20:24.255002  721676 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 20:20:24.255006  721676 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 20:20:24.255130  721676 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 20:20:24.255135  721676 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 20:20:24.255193  721676 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0906 20:20:24.255197  721676 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 20:20:24.255389  721676 command_runner.go:130] > [mark-control-plane] Marking the node multinode-782472 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 20:20:24.255394  721676 kubeadm.go:322] [mark-control-plane] Marking the node multinode-782472 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 20:20:24.255450  721676 command_runner.go:130] > [bootstrap-token] Using token: krsmlo.4qh2ury7dtivzqff
	I0906 20:20:24.255454  721676 kubeadm.go:322] [bootstrap-token] Using token: krsmlo.4qh2ury7dtivzqff
	I0906 20:20:24.257373  721676 out.go:204]   - Configuring RBAC rules ...
	I0906 20:20:24.257488  721676 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 20:20:24.257496  721676 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 20:20:24.257580  721676 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 20:20:24.257585  721676 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 20:20:24.257726  721676 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 20:20:24.257730  721676 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 20:20:24.257857  721676 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 20:20:24.257861  721676 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 20:20:24.257985  721676 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 20:20:24.257990  721676 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 20:20:24.258181  721676 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 20:20:24.258187  721676 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 20:20:24.258301  721676 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 20:20:24.258305  721676 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 20:20:24.258349  721676 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0906 20:20:24.258352  721676 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0906 20:20:24.258398  721676 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0906 20:20:24.258402  721676 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0906 20:20:24.258406  721676 kubeadm.go:322] 
	I0906 20:20:24.258467  721676 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0906 20:20:24.258471  721676 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0906 20:20:24.258475  721676 kubeadm.go:322] 
	I0906 20:20:24.258552  721676 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0906 20:20:24.258556  721676 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0906 20:20:24.258560  721676 kubeadm.go:322] 
	I0906 20:20:24.258586  721676 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0906 20:20:24.258589  721676 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0906 20:20:24.258649  721676 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 20:20:24.258653  721676 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 20:20:24.258703  721676 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 20:20:24.258707  721676 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 20:20:24.258711  721676 kubeadm.go:322] 
	I0906 20:20:24.258765  721676 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0906 20:20:24.258769  721676 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0906 20:20:24.258777  721676 kubeadm.go:322] 
	I0906 20:20:24.258828  721676 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 20:20:24.258832  721676 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 20:20:24.258837  721676 kubeadm.go:322] 
	I0906 20:20:24.258890  721676 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0906 20:20:24.258894  721676 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0906 20:20:24.258969  721676 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 20:20:24.258973  721676 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 20:20:24.259042  721676 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 20:20:24.259046  721676 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 20:20:24.259050  721676 kubeadm.go:322] 
	I0906 20:20:24.259134  721676 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0906 20:20:24.259138  721676 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 20:20:24.259215  721676 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0906 20:20:24.259219  721676 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0906 20:20:24.259223  721676 kubeadm.go:322] 
	I0906 20:20:24.259307  721676 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token krsmlo.4qh2ury7dtivzqff \
	I0906 20:20:24.259311  721676 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token krsmlo.4qh2ury7dtivzqff \
	I0906 20:20:24.259415  721676 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:925f63182e76e2af8a48585abf1c88b69bde0aecb697a8f6aa9904972710d54a \
	I0906 20:20:24.259418  721676 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:925f63182e76e2af8a48585abf1c88b69bde0aecb697a8f6aa9904972710d54a \
	I0906 20:20:24.259439  721676 command_runner.go:130] > 	--control-plane 
	I0906 20:20:24.259443  721676 kubeadm.go:322] 	--control-plane 
	I0906 20:20:24.259447  721676 kubeadm.go:322] 
	I0906 20:20:24.259533  721676 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0906 20:20:24.259537  721676 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0906 20:20:24.259541  721676 kubeadm.go:322] 
	I0906 20:20:24.259624  721676 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token krsmlo.4qh2ury7dtivzqff \
	I0906 20:20:24.259628  721676 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token krsmlo.4qh2ury7dtivzqff \
	I0906 20:20:24.259730  721676 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:925f63182e76e2af8a48585abf1c88b69bde0aecb697a8f6aa9904972710d54a 
	I0906 20:20:24.259742  721676 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:925f63182e76e2af8a48585abf1c88b69bde0aecb697a8f6aa9904972710d54a 
	I0906 20:20:24.259755  721676 cni.go:84] Creating CNI manager for ""
	I0906 20:20:24.259767  721676 cni.go:136] 1 nodes found, recommending kindnet
	I0906 20:20:24.261587  721676 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0906 20:20:24.263515  721676 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0906 20:20:24.279441  721676 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0906 20:20:24.279467  721676 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I0906 20:20:24.279475  721676 command_runner.go:130] > Device: 3ah/58d	Inode: 5453116     Links: 1
	I0906 20:20:24.279483  721676 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0906 20:20:24.279489  721676 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I0906 20:20:24.279495  721676 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I0906 20:20:24.279501  721676 command_runner.go:130] > Change: 2023-09-06 19:57:07.056534413 +0000
	I0906 20:20:24.279516  721676 command_runner.go:130] >  Birth: 2023-09-06 19:57:07.016534467 +0000
	I0906 20:20:24.280200  721676 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0906 20:20:24.280219  721676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0906 20:20:24.306682  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0906 20:20:25.174777  721676 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0906 20:20:25.181359  721676 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0906 20:20:25.191740  721676 command_runner.go:130] > serviceaccount/kindnet created
	I0906 20:20:25.205413  721676 command_runner.go:130] > daemonset.apps/kindnet created
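With a single node found, kindnet is recommended as the CNI and its manifest is applied with the pinned kubectl binary against the node-local kubeconfig. A rough equivalent of that apply step, driving kubectl through os/exec (the paths are the ones shown in the log; this is an illustration, not minikube's runner code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Apply the generated CNI manifest with the pinned kubectl binary,
    	// authenticating with the cluster's local admin kubeconfig.
    	cmd := exec.Command(
    		"/var/lib/minikube/binaries/v1.28.1/kubectl",
    		"apply",
    		"--kubeconfig=/var/lib/minikube/kubeconfig",
    		"-f", "/var/tmp/minikube/cni.yaml",
    	)
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Fprintln(os.Stderr, "apply failed:", err)
    		os.Exit(1)
    	}
    }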
	I0906 20:20:25.210999  721676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 20:20:25.211095  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:25.211121  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138 minikube.k8s.io/name=multinode-782472 minikube.k8s.io/updated_at=2023_09_06T20_20_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:25.354402  721676 command_runner.go:130] > node/multinode-782472 labeled
	I0906 20:20:25.358697  721676 command_runner.go:130] > -16
	I0906 20:20:25.359904  721676 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0906 20:20:25.363835  721676 ops.go:34] apiserver oom_adj: -16
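ops.go is confirming that kube-apiserver runs with oom_adj -16 so the kernel's OOM killer prefers other processes over it. The check is simply cat /proc/$(pgrep kube-apiserver)/oom_adj; a direct Go translation, illustrative only and assuming pgrep returns a single PID:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Find the apiserver PID the same way the log does, via pgrep.
    	// Assumes exactly one kube-apiserver process is running.
    	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "pgrep failed:", err)
    		os.Exit(1)
    	}
    	path := fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid)))
    	val, err := os.ReadFile(path)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(val)))
    }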
	I0906 20:20:25.363931  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:25.501020  721676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0906 20:20:25.501115  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:25.593239  721676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0906 20:20:26.094014  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:26.191795  721676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0906 20:20:26.593391  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:26.690750  721676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0906 20:20:27.094415  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:27.194006  721676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0906 20:20:27.593533  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:27.688756  721676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0906 20:20:28.094379  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:28.189831  721676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0906 20:20:28.593496  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:28.690802  721676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0906 20:20:29.094176  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:29.186464  721676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0906 20:20:29.594316  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:29.685363  721676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0906 20:20:30.094086  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:30.219899  721676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0906 20:20:30.594270  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:30.692283  721676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0906 20:20:31.093534  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:31.195585  721676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0906 20:20:31.594034  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:31.686295  721676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0906 20:20:32.094028  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:32.181834  721676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0906 20:20:32.594341  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:32.685096  721676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0906 20:20:33.094374  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:33.206892  721676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0906 20:20:33.593455  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:33.704347  721676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0906 20:20:34.094060  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:34.205179  721676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0906 20:20:34.593440  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:34.687045  721676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0906 20:20:35.094417  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:35.194751  721676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0906 20:20:35.594416  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:35.694234  721676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0906 20:20:36.093571  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:36.198757  721676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0906 20:20:36.594181  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:36.697889  721676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0906 20:20:37.093461  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:20:37.211095  721676 command_runner.go:130] > NAME      SECRETS   AGE
	I0906 20:20:37.211113  721676 command_runner.go:130] > default   0         1s
	I0906 20:20:37.215098  721676 kubeadm.go:1081] duration metric: took 12.004080822s to wait for elevateKubeSystemPrivileges.
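The burst of 'serviceaccounts "default" not found' lines above is a retry loop: the default ServiceAccount only appears once the controller-manager's service-account controller has started, so the same kubectl check is re-run on an interval until it succeeds or a deadline passes (about 12 seconds here). A generic version of that pattern in Go, with a hypothetical check function standing in for the real API call:

    package main

    import (
    	"context"
    	"errors"
    	"fmt"
    	"time"
    )

    // pollUntil re-runs check on the given interval until it returns nil or
    // the context deadline expires. check is a placeholder for something
    // like "kubectl get sa default".
    func pollUntil(ctx context.Context, interval time.Duration, check func() error) error {
    	ticker := time.NewTicker(interval)
    	defer ticker.Stop()
    	for {
    		if err := check(); err == nil {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()

    	attempts := 0
    	err := pollUntil(ctx, 500*time.Millisecond, func() error {
    		attempts++
    		if attempts < 24 { // stand-in for the real ServiceAccount lookup
    			return errors.New(`serviceaccounts "default" not found`)
    		}
    		return nil
    	})
    	fmt.Println("done after", attempts, "attempts, err:", err)
    }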
	I0906 20:20:37.215124  721676 kubeadm.go:406] StartCluster complete in 28.986962089s
	I0906 20:20:37.215140  721676 settings.go:142] acquiring lock: {Name:mk0ee322179d939fb926f535c1408b304c5b8b41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:20:37.215198  721676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17116-652515/kubeconfig
	I0906 20:20:37.215885  721676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/kubeconfig: {Name:mkd5486ff1869e88b8977ac367495417356f4177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:20:37.216402  721676 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17116-652515/kubeconfig
	I0906 20:20:37.216704  721676 kapi.go:59] client config for multinode-782472: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/client.crt", KeyFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/client.key", CAFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x172c280), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
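kapi.go builds a rest.Config pointing at https://192.168.58.2:8443 and authenticating with the profile's client certificate, key, and the minikube CA. Constructing the same kind of client with client-go looks roughly like this (a sketch with shortened placeholder paths, not the exact code in kapi.go):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	// Client config equivalent to the dump above: API server endpoint
    	// plus mutual-TLS credentials from the profile directory.
    	cfg := &rest.Config{
    		Host: "https://192.168.58.2:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/path/to/profiles/multinode-782472/client.crt",
    			KeyFile:  "/path/to/profiles/multinode-782472/client.key",
    			CAFile:   "/path/to/.minikube/ca.crt",
    		},
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Sanity check: list nodes in the freshly started cluster.
    	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("nodes:", len(nodes.Items))
    }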
	I0906 20:20:37.217897  721676 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0906 20:20:37.217908  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:37.217917  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:37.217924  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:37.218498  721676 config.go:182] Loaded profile config "multinode-782472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 20:20:37.218553  721676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 20:20:37.218727  721676 cert_rotation.go:137] Starting client certificate rotation controller
	I0906 20:20:37.218711  721676 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0906 20:20:37.218796  721676 addons.go:69] Setting default-storageclass=true in profile "multinode-782472"
	I0906 20:20:37.218826  721676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-782472"
	I0906 20:20:37.218796  721676 addons.go:69] Setting storage-provisioner=true in profile "multinode-782472"
	I0906 20:20:37.218907  721676 addons.go:231] Setting addon storage-provisioner=true in "multinode-782472"
	I0906 20:20:37.218964  721676 host.go:66] Checking if "multinode-782472" exists ...
	I0906 20:20:37.219148  721676 cli_runner.go:164] Run: docker container inspect multinode-782472 --format={{.State.Status}}
	I0906 20:20:37.219374  721676 cli_runner.go:164] Run: docker container inspect multinode-782472 --format={{.State.Status}}
	I0906 20:20:37.246298  721676 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0906 20:20:37.246382  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:37.246406  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:37.246453  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:37.246480  721676 round_trippers.go:580]     Content-Length: 291
	I0906 20:20:37.246501  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:37 GMT
	I0906 20:20:37.246545  721676 round_trippers.go:580]     Audit-Id: c902e648-69d1-4f55-a035-b411af5405d7
	I0906 20:20:37.246573  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:37.246595  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:37.246662  721676 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"88a10611-e857-48eb-b81e-bdcb9cbcce00","resourceVersion":"346","creationTimestamp":"2023-09-06T20:20:24Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0906 20:20:37.247263  721676 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"88a10611-e857-48eb-b81e-bdcb9cbcce00","resourceVersion":"346","creationTimestamp":"2023-09-06T20:20:24Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0906 20:20:37.247396  721676 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0906 20:20:37.247550  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:37.247980  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:37.248023  721676 round_trippers.go:473]     Content-Type: application/json
	I0906 20:20:37.248070  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:37.258261  721676 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0906 20:20:37.258287  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:37.258297  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:37.258304  721676 round_trippers.go:580]     Content-Length: 291
	I0906 20:20:37.258316  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:37 GMT
	I0906 20:20:37.258323  721676 round_trippers.go:580]     Audit-Id: e839d0e9-6b7c-4ee3-8efc-73fdea9f5216
	I0906 20:20:37.258334  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:37.258340  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:37.258348  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:37.258375  721676 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"88a10611-e857-48eb-b81e-bdcb9cbcce00","resourceVersion":"347","creationTimestamp":"2023-09-06T20:20:24Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0906 20:20:37.258527  721676 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0906 20:20:37.258540  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:37.258548  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:37.258559  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:37.261871  721676 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17116-652515/kubeconfig
	I0906 20:20:37.262166  721676 kapi.go:59] client config for multinode-782472: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/client.crt", KeyFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/client.key", CAFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x172c280), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 20:20:37.262494  721676 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0906 20:20:37.262501  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:37.262522  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:37.262530  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:37.273796  721676 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0906 20:20:37.273816  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:37.273825  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:37.273831  721676 round_trippers.go:580]     Content-Length: 291
	I0906 20:20:37.273838  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:37 GMT
	I0906 20:20:37.273844  721676 round_trippers.go:580]     Audit-Id: 36c65e36-b58c-47cb-b67f-472bf96ada8f
	I0906 20:20:37.273851  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:37.273857  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:37.273887  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:37.273912  721676 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"88a10611-e857-48eb-b81e-bdcb9cbcce00","resourceVersion":"347","creationTimestamp":"2023-09-06T20:20:24Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0906 20:20:37.273999  721676 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-782472" context rescaled to 1 replicas
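The GET/PUT pair on .../deployments/coredns/scale above is how the cluster ends up with a single CoreDNS replica before the node wait begins. For reference only, here is a rough, self-contained client-go sketch of that scale round trip; it is not minikube's own code, and the kubeconfig path and error handling are illustrative assumptions rather than details taken from this run.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (illustrative path).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	deployments := cs.AppsV1().Deployments("kube-system")

	// GET the autoscaling/v1 Scale subresource, as in the request logged above.
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// PUT it back with spec.replicas=1; status.replicas catches up once the
	// controller reconciles, which is why the logged response still shows 2.
	scale.Spec.Replicas = 1
	if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}

Using the Scale subresource rather than editing the Deployment spec avoids a read-modify-write of the whole object, which matches the 291-byte Scale bodies seen in the responses above.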
	I0906 20:20:37.274022  721676 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:20:37.279641  721676 out.go:177] * Verifying Kubernetes components...
	I0906 20:20:37.281496  721676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:20:37.280179  721676 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0906 20:20:37.281899  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:37.281910  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:37.281917  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:37.281924  721676 round_trippers.go:580]     Content-Length: 109
	I0906 20:20:37.281931  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:37 GMT
	I0906 20:20:37.281938  721676 round_trippers.go:580]     Audit-Id: aabbbf6d-b2e7-4563-b090-a40cef599dec
	I0906 20:20:37.281945  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:37.281951  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:37.281974  721676 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"347"},"items":[]}
	I0906 20:20:37.282372  721676 addons.go:231] Setting addon default-storageclass=true in "multinode-782472"
	I0906 20:20:37.282408  721676 host.go:66] Checking if "multinode-782472" exists ...
	I0906 20:20:37.282932  721676 cli_runner.go:164] Run: docker container inspect multinode-782472 --format={{.State.Status}}
	I0906 20:20:37.289256  721676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:20:37.291511  721676 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:20:37.291536  721676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 20:20:37.291604  721676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-782472
	I0906 20:20:37.326995  721676 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 20:20:37.327021  721676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 20:20:37.327088  721676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-782472
	I0906 20:20:37.347620  721676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33492 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/multinode-782472/id_rsa Username:docker}
	I0906 20:20:37.369319  721676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33492 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/multinode-782472/id_rsa Username:docker}
	I0906 20:20:37.455140  721676 command_runner.go:130] > apiVersion: v1
	I0906 20:20:37.455205  721676 command_runner.go:130] > data:
	I0906 20:20:37.455225  721676 command_runner.go:130] >   Corefile: |
	I0906 20:20:37.455244  721676 command_runner.go:130] >     .:53 {
	I0906 20:20:37.455278  721676 command_runner.go:130] >         errors
	I0906 20:20:37.455298  721676 command_runner.go:130] >         health {
	I0906 20:20:37.455316  721676 command_runner.go:130] >            lameduck 5s
	I0906 20:20:37.455334  721676 command_runner.go:130] >         }
	I0906 20:20:37.455367  721676 command_runner.go:130] >         ready
	I0906 20:20:37.455392  721676 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0906 20:20:37.455415  721676 command_runner.go:130] >            pods insecure
	I0906 20:20:37.455452  721676 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0906 20:20:37.455479  721676 command_runner.go:130] >            ttl 30
	I0906 20:20:37.455500  721676 command_runner.go:130] >         }
	I0906 20:20:37.455535  721676 command_runner.go:130] >         prometheus :9153
	I0906 20:20:37.455560  721676 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0906 20:20:37.455581  721676 command_runner.go:130] >            max_concurrent 1000
	I0906 20:20:37.455617  721676 command_runner.go:130] >         }
	I0906 20:20:37.455639  721676 command_runner.go:130] >         cache 30
	I0906 20:20:37.455657  721676 command_runner.go:130] >         loop
	I0906 20:20:37.455677  721676 command_runner.go:130] >         reload
	I0906 20:20:37.455706  721676 command_runner.go:130] >         loadbalance
	I0906 20:20:37.455729  721676 command_runner.go:130] >     }
	I0906 20:20:37.455756  721676 command_runner.go:130] > kind: ConfigMap
	I0906 20:20:37.455794  721676 command_runner.go:130] > metadata:
	I0906 20:20:37.455834  721676 command_runner.go:130] >   creationTimestamp: "2023-09-06T20:20:24Z"
	I0906 20:20:37.455854  721676 command_runner.go:130] >   name: coredns
	I0906 20:20:37.455884  721676 command_runner.go:130] >   namespace: kube-system
	I0906 20:20:37.455908  721676 command_runner.go:130] >   resourceVersion: "223"
	I0906 20:20:37.455928  721676 command_runner.go:130] >   uid: 0bcd1617-175d-42f4-95c0-ef44c8d7520a
	I0906 20:20:37.456107  721676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 20:20:37.456405  721676 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17116-652515/kubeconfig
	I0906 20:20:37.456647  721676 kapi.go:59] client config for multinode-782472: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/client.crt", KeyFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/client.key", CAFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x172c280), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 20:20:37.456906  721676 node_ready.go:35] waiting up to 6m0s for node "multinode-782472" to be "Ready" ...
	I0906 20:20:37.456987  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:20:37.456993  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:37.457002  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:37.457008  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:37.461801  721676 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 20:20:37.461874  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:37.461898  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:37.461925  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:37.461947  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:37.461969  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:37.461991  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:37 GMT
	I0906 20:20:37.462014  721676 round_trippers.go:580]     Audit-Id: ce705d17-df20-4abd-822a-f746a91c3a4a
	I0906 20:20:37.463332  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","resourceVersion":"305","creationTimestamp":"2023-09-06T20:20:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138","minikube.k8s.io/name":"multinode-782472","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_06T20_20_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-06T20:20:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0906 20:20:37.464016  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:20:37.464025  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:37.464034  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:37.464040  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:37.484078  721676 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0906 20:20:37.484137  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:37.484158  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:37.484184  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:37 GMT
	I0906 20:20:37.484207  721676 round_trippers.go:580]     Audit-Id: 557e8b0f-834e-441a-9d11-8d8ddd5a8be9
	I0906 20:20:37.484229  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:37.484253  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:37.484283  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:37.485854  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","resourceVersion":"305","creationTimestamp":"2023-09-06T20:20:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138","minikube.k8s.io/name":"multinode-782472","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_06T20_20_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-06T20:20:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0906 20:20:37.530462  721676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 20:20:37.614946  721676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:20:37.985439  721676 command_runner.go:130] > configmap/coredns replaced
	I0906 20:20:37.986709  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:20:37.986757  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:37.986783  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:37.986806  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:37.989504  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:20:37.989551  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:37.989573  721676 round_trippers.go:580]     Audit-Id: 859ed0b9-026a-48e9-986e-82cfcdb88ee5
	I0906 20:20:37.989603  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:37.989627  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:37.989651  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:37.989676  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:37.989700  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:37 GMT
	I0906 20:20:37.990191  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","resourceVersion":"305","creationTimestamp":"2023-09-06T20:20:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138","minikube.k8s.io/name":"multinode-782472","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_06T20_20_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-06T20:20:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0906 20:20:37.992153  721676 start.go:907] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
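The "host record injected" line summarizes the ssh_runner sed pipeline earlier in the log, which rewrites the coredns ConfigMap so that host.minikube.internal resolves inside the cluster. A rough client-go equivalent of that edit is sketched below; it assumes the same Corefile layout dumped above, and the gateway address 192.168.58.1 is used purely as an example value from this run, not as a fixed constant.

package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	cms := cs.CoreV1().ConfigMaps("kube-system")

	cm, err := cms.Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Insert a hosts{} stanza ahead of the "forward . /etc/resolv.conf" block,
	// so host.minikube.internal resolves to the host gateway (illustrative IP).
	hosts := "        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }\n"
	corefile := cm.Data["Corefile"]
	if !strings.Contains(corefile, "host.minikube.internal") {
		cm.Data["Corefile"] = strings.Replace(corefile, "        forward", hosts+"        forward", 1)
		if _, err := cms.Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
}

The "configmap/coredns replaced" line above is the corresponding result of minikube's own kubectl replace pipeline; CoreDNS then picks up the change via its reload plugin.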
	I0906 20:20:37.992236  721676 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0906 20:20:38.139978  721676 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0906 20:20:38.151170  721676 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0906 20:20:38.165467  721676 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0906 20:20:38.191549  721676 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0906 20:20:38.215540  721676 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0906 20:20:38.235193  721676 command_runner.go:130] > pod/storage-provisioner created
	I0906 20:20:38.249343  721676 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0906 20:20:38.250643  721676 addons.go:502] enable addons completed in 1.031928576s: enabled=[default-storageclass storage-provisioner]
	I0906 20:20:38.487503  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:20:38.487529  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:38.487544  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:38.487552  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:38.491557  721676 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 20:20:38.491652  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:38.491674  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:38.491693  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:38.491729  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:38 GMT
	I0906 20:20:38.491754  721676 round_trippers.go:580]     Audit-Id: 827b6a74-7c83-45ed-afbe-76779bd0827b
	I0906 20:20:38.491778  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:38.491811  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:38.491987  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","resourceVersion":"305","creationTimestamp":"2023-09-06T20:20:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138","minikube.k8s.io/name":"multinode-782472","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_06T20_20_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-06T20:20:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0906 20:20:38.987266  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:20:38.987290  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:38.987301  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:38.987310  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:38.989623  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:20:38.989647  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:38.989656  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:38.989663  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:38.989669  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:38.989676  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:38 GMT
	I0906 20:20:38.989683  721676 round_trippers.go:580]     Audit-Id: 20429967-fe4c-46a2-9cb1-b7087198fba4
	I0906 20:20:38.989692  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:38.989942  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","resourceVersion":"305","creationTimestamp":"2023-09-06T20:20:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138","minikube.k8s.io/name":"multinode-782472","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_06T20_20_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-06T20:20:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0906 20:20:39.487222  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:20:39.487249  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:39.487259  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:39.487267  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:39.489971  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:20:39.490031  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:39.490076  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:39.490108  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:39 GMT
	I0906 20:20:39.490131  721676 round_trippers.go:580]     Audit-Id: 3c3c8077-9cb6-4831-867e-6e6e19ca2df6
	I0906 20:20:39.490153  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:39.490184  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:39.490199  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:39.490361  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","resourceVersion":"377","creationTimestamp":"2023-09-06T20:20:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138","minikube.k8s.io/name":"multinode-782472","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_06T20_20_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-06T20:20:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0906 20:20:39.490754  721676 node_ready.go:49] node "multinode-782472" has status "Ready":"True"
	I0906 20:20:39.490771  721676 node_ready.go:38] duration metric: took 2.033847652s waiting for node "multinode-782472" to be "Ready" ...
	I0906 20:20:39.490780  721676 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:20:39.490847  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0906 20:20:39.490858  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:39.490866  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:39.490873  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:39.494413  721676 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 20:20:39.494438  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:39.494449  721676 round_trippers.go:580]     Audit-Id: 68ad2585-cbbc-456d-ae6b-98cd2fb04f7c
	I0906 20:20:39.494456  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:39.494463  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:39.494470  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:39.494480  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:39.494490  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:39 GMT
	I0906 20:20:39.495302  721676 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"387"},"items":[{"metadata":{"name":"coredns-5dd5756b68-79759","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b492a232-9d20-4012-8a94-0ff7eca50db6","resourceVersion":"381","creationTimestamp":"2023-09-06T20:20:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5b61ad83-6adc-400b-813b-0fdf43f24858","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b61ad83-6adc-400b-813b-0fdf43f24858\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55535 chars]
	I0906 20:20:39.499314  721676 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-79759" in "kube-system" namespace to be "Ready" ...
	I0906 20:20:39.499406  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-79759
	I0906 20:20:39.499423  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:39.499433  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:39.499440  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:39.502298  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:20:39.502335  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:39.502345  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:39.502352  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:39 GMT
	I0906 20:20:39.502359  721676 round_trippers.go:580]     Audit-Id: 2b9fd012-b42a-40f1-b4dd-1ff9b65be0f9
	I0906 20:20:39.502365  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:39.502372  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:39.502379  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:39.502803  721676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-79759","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b492a232-9d20-4012-8a94-0ff7eca50db6","resourceVersion":"381","creationTimestamp":"2023-09-06T20:20:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5b61ad83-6adc-400b-813b-0fdf43f24858","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b61ad83-6adc-400b-813b-0fdf43f24858\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0906 20:20:39.503331  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:20:39.503348  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:39.503357  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:39.503365  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:39.505958  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:20:39.505979  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:39.505988  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:39.505995  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:39.506002  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:39.506008  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:39.506016  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:39 GMT
	I0906 20:20:39.506027  721676 round_trippers.go:580]     Audit-Id: 5aaa9bf7-4ebe-4849-8f17-a0a6b69c5cc0
	I0906 20:20:39.506320  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","resourceVersion":"377","creationTimestamp":"2023-09-06T20:20:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138","minikube.k8s.io/name":"multinode-782472","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_06T20_20_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-06T20:20:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0906 20:20:39.506780  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-79759
	I0906 20:20:39.506795  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:39.506805  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:39.506816  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:39.509265  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:20:39.509290  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:39.509300  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:39.509308  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:39.509336  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:39 GMT
	I0906 20:20:39.509351  721676 round_trippers.go:580]     Audit-Id: c87b73d4-8ae2-4c01-bf27-8bc8c77bdbb6
	I0906 20:20:39.509358  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:39.509365  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:39.509544  721676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-79759","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b492a232-9d20-4012-8a94-0ff7eca50db6","resourceVersion":"381","creationTimestamp":"2023-09-06T20:20:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5b61ad83-6adc-400b-813b-0fdf43f24858","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b61ad83-6adc-400b-813b-0fdf43f24858\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0906 20:20:39.510119  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:20:39.510136  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:39.510146  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:39.510154  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:39.512604  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:20:39.512624  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:39.512633  721676 round_trippers.go:580]     Audit-Id: 80c96656-d800-42fe-821b-a892469196a3
	I0906 20:20:39.512640  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:39.512646  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:39.512653  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:39.512660  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:39.512666  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:39 GMT
	I0906 20:20:39.512826  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","resourceVersion":"377","creationTimestamp":"2023-09-06T20:20:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138","minikube.k8s.io/name":"multinode-782472","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_06T20_20_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-06T20:20:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0906 20:20:40.014297  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-79759
	I0906 20:20:40.014323  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:40.014334  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:40.014341  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:40.017837  721676 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 20:20:40.017939  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:40.017963  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:40.017993  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:40.018021  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:40 GMT
	I0906 20:20:40.018075  721676 round_trippers.go:580]     Audit-Id: 5c0352a2-ee49-4c2f-ba7a-fb6bdadbcc03
	I0906 20:20:40.018100  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:40.018121  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:40.018341  721676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-79759","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b492a232-9d20-4012-8a94-0ff7eca50db6","resourceVersion":"381","creationTimestamp":"2023-09-06T20:20:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5b61ad83-6adc-400b-813b-0fdf43f24858","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b61ad83-6adc-400b-813b-0fdf43f24858\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0906 20:20:40.018965  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:20:40.018981  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:40.018991  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:40.018999  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:40.022109  721676 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 20:20:40.022138  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:40.022149  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:40 GMT
	I0906 20:20:40.022156  721676 round_trippers.go:580]     Audit-Id: 09a51c4b-e4cd-46b5-98ea-0c96ab801614
	I0906 20:20:40.022162  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:40.022170  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:40.022177  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:40.022184  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:40.022616  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","resourceVersion":"377","creationTimestamp":"2023-09-06T20:20:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138","minikube.k8s.io/name":"multinode-782472","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_06T20_20_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-06T20:20:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0906 20:20:40.513517  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-79759
	I0906 20:20:40.513542  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:40.513552  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:40.513560  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:40.516268  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:20:40.516370  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:40.516393  721676 round_trippers.go:580]     Audit-Id: a8b31b7f-d631-4b77-8ef2-3ae9a0d96b50
	I0906 20:20:40.516463  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:40.516471  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:40.516478  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:40.516506  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:40.516521  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:40 GMT
	I0906 20:20:40.516647  721676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-79759","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b492a232-9d20-4012-8a94-0ff7eca50db6","resourceVersion":"399","creationTimestamp":"2023-09-06T20:20:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5b61ad83-6adc-400b-813b-0fdf43f24858","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b61ad83-6adc-400b-813b-0fdf43f24858\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0906 20:20:40.517202  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:20:40.517221  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:40.517230  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:40.517239  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:40.519749  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:20:40.519801  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:40.519830  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:40.519839  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:40.519846  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:40 GMT
	I0906 20:20:40.519862  721676 round_trippers.go:580]     Audit-Id: b49ecb9f-fd83-4f9f-8743-9da82a5a610e
	I0906 20:20:40.519870  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:40.519876  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:40.520288  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","resourceVersion":"377","creationTimestamp":"2023-09-06T20:20:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138","minikube.k8s.io/name":"multinode-782472","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_06T20_20_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-06T20:20:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0906 20:20:40.520981  721676 pod_ready.go:92] pod "coredns-5dd5756b68-79759" in "kube-system" namespace has status "Ready":"True"
	I0906 20:20:40.521019  721676 pod_ready.go:81] duration metric: took 1.021675437s waiting for pod "coredns-5dd5756b68-79759" in "kube-system" namespace to be "Ready" ...
	I0906 20:20:40.521079  721676 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-782472" in "kube-system" namespace to be "Ready" ...
	I0906 20:20:40.521189  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-782472
	I0906 20:20:40.521208  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:40.521245  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:40.521272  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:40.525227  721676 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 20:20:40.525249  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:40.525265  721676 round_trippers.go:580]     Audit-Id: 85730d5f-0954-463b-86d8-2d21804aecb4
	I0906 20:20:40.525272  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:40.525279  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:40.525286  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:40.525293  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:40.525300  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:40 GMT
	I0906 20:20:40.525554  721676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-782472","namespace":"kube-system","uid":"c7fbee74-f36a-435f-b4eb-9e01833854a3","resourceVersion":"290","creationTimestamp":"2023-09-06T20:20:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"dfaa62571a1327eee1c536a3243dc8f3","kubernetes.io/config.mirror":"dfaa62571a1327eee1c536a3243dc8f3","kubernetes.io/config.seen":"2023-09-06T20:20:24.185455730Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0906 20:20:40.526393  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:20:40.526411  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:40.526421  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:40.526429  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:40.529210  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:20:40.529228  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:40.529246  721676 round_trippers.go:580]     Audit-Id: 35d7a074-45d0-4d47-a2a3-aaded105c5c4
	I0906 20:20:40.529253  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:40.529260  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:40.529267  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:40.529274  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:40.529280  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:40 GMT
	I0906 20:20:40.529384  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","resourceVersion":"377","creationTimestamp":"2023-09-06T20:20:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138","minikube.k8s.io/name":"multinode-782472","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_06T20_20_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-06T20:20:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0906 20:20:40.529766  721676 pod_ready.go:92] pod "etcd-multinode-782472" in "kube-system" namespace has status "Ready":"True"
	I0906 20:20:40.529776  721676 pod_ready.go:81] duration metric: took 8.675992ms waiting for pod "etcd-multinode-782472" in "kube-system" namespace to be "Ready" ...
	I0906 20:20:40.529790  721676 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-782472" in "kube-system" namespace to be "Ready" ...
	I0906 20:20:40.529847  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-782472
	I0906 20:20:40.529851  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:40.529868  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:40.529876  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:40.532341  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:20:40.532391  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:40.532413  721676 round_trippers.go:580]     Audit-Id: 34a5a231-d4bb-49b4-af04-c67f7d0c6eb2
	I0906 20:20:40.532437  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:40.532474  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:40.532488  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:40.532495  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:40.532501  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:40 GMT
	I0906 20:20:40.532646  721676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-782472","namespace":"kube-system","uid":"8d109f5d-3d07-4d57-bb86-5144199cf5e8","resourceVersion":"260","creationTimestamp":"2023-09-06T20:20:24Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"d8cb9b6609c14b0204af7167dd8050e9","kubernetes.io/config.mirror":"d8cb9b6609c14b0204af7167dd8050e9","kubernetes.io/config.seen":"2023-09-06T20:20:24.185460022Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0906 20:20:40.533180  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:20:40.533196  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:40.533204  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:40.533211  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:40.535473  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:20:40.535499  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:40.535509  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:40.535516  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:40.535522  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:40.535529  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:40 GMT
	I0906 20:20:40.535536  721676 round_trippers.go:580]     Audit-Id: 461283a7-1cd8-4d88-8958-5905fe04c6c3
	I0906 20:20:40.535543  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:40.535818  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","resourceVersion":"377","creationTimestamp":"2023-09-06T20:20:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138","minikube.k8s.io/name":"multinode-782472","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_06T20_20_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-06T20:20:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0906 20:20:40.536208  721676 pod_ready.go:92] pod "kube-apiserver-multinode-782472" in "kube-system" namespace has status "Ready":"True"
	I0906 20:20:40.536224  721676 pod_ready.go:81] duration metric: took 6.42774ms waiting for pod "kube-apiserver-multinode-782472" in "kube-system" namespace to be "Ready" ...
	I0906 20:20:40.536235  721676 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-782472" in "kube-system" namespace to be "Ready" ...
	I0906 20:20:40.536292  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-782472
	I0906 20:20:40.536303  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:40.536312  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:40.536319  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:40.538706  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:20:40.538734  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:40.538742  721676 round_trippers.go:580]     Audit-Id: 19b9b765-ac6b-403f-93bb-2030d1904a84
	I0906 20:20:40.538749  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:40.538768  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:40.538781  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:40.538788  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:40.538812  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:40 GMT
	I0906 20:20:40.539024  721676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-782472","namespace":"kube-system","uid":"67462036-1f86-4cd8-8872-e0f7c61eec13","resourceVersion":"263","creationTimestamp":"2023-09-06T20:20:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fae94c75f3d99d8053cd41b1188a79cb","kubernetes.io/config.mirror":"fae94c75f3d99d8053cd41b1188a79cb","kubernetes.io/config.seen":"2023-09-06T20:20:24.185461507Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0906 20:20:40.687945  721676 request.go:629] Waited for 148.363461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:20:40.688048  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:20:40.688064  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:40.688074  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:40.688089  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:40.690705  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:20:40.690727  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:40.690744  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:40.690752  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:40.690764  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:40.690772  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:40.690778  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:40 GMT
	I0906 20:20:40.690789  721676 round_trippers.go:580]     Audit-Id: 8225e376-9c92-4bdc-a64b-3af842883a82
	I0906 20:20:40.690956  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","resourceVersion":"377","creationTimestamp":"2023-09-06T20:20:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138","minikube.k8s.io/name":"multinode-782472","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_06T20_20_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-06T20:20:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0906 20:20:40.691344  721676 pod_ready.go:92] pod "kube-controller-manager-multinode-782472" in "kube-system" namespace has status "Ready":"True"
	I0906 20:20:40.691366  721676 pod_ready.go:81] duration metric: took 155.118381ms waiting for pod "kube-controller-manager-multinode-782472" in "kube-system" namespace to be "Ready" ...
	I0906 20:20:40.691379  721676 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lhjnq" in "kube-system" namespace to be "Ready" ...
	I0906 20:20:40.887843  721676 request.go:629] Waited for 196.373082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lhjnq
	I0906 20:20:40.887904  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lhjnq
	I0906 20:20:40.887910  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:40.887924  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:40.887935  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:40.890694  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:20:40.890757  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:40.890779  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:40 GMT
	I0906 20:20:40.890802  721676 round_trippers.go:580]     Audit-Id: 83e3d189-2c5d-428d-98ea-a6a15ad06ca9
	I0906 20:20:40.890838  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:40.890892  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:40.890907  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:40.890914  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:40.891055  721676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lhjnq","generateName":"kube-proxy-","namespace":"kube-system","uid":"2eb21731-931d-41b6-a6d8-da9bb0d0d3ff","resourceVersion":"385","creationTimestamp":"2023-09-06T20:20:36Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1ecde341-d8a6-4231-a369-8815db31017a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ecde341-d8a6-4231-a369-8815db31017a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0906 20:20:41.087934  721676 request.go:629] Waited for 196.36971ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:20:41.088011  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:20:41.088039  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:41.088054  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:41.088062  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:41.090504  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:20:41.090563  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:41.090584  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:41 GMT
	I0906 20:20:41.090606  721676 round_trippers.go:580]     Audit-Id: 880f3cfd-c424-4794-a3d1-62fc2773d2a9
	I0906 20:20:41.090621  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:41.090642  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:41.090656  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:41.090663  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:41.091004  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","resourceVersion":"377","creationTimestamp":"2023-09-06T20:20:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138","minikube.k8s.io/name":"multinode-782472","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_06T20_20_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-06T20:20:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0906 20:20:41.091438  721676 pod_ready.go:92] pod "kube-proxy-lhjnq" in "kube-system" namespace has status "Ready":"True"
	I0906 20:20:41.091457  721676 pod_ready.go:81] duration metric: took 400.06731ms waiting for pod "kube-proxy-lhjnq" in "kube-system" namespace to be "Ready" ...
	I0906 20:20:41.091468  721676 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-782472" in "kube-system" namespace to be "Ready" ...
	I0906 20:20:41.287897  721676 request.go:629] Waited for 196.361037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-782472
	I0906 20:20:41.287988  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-782472
	I0906 20:20:41.288017  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:41.288048  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:41.288067  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:41.290614  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:20:41.290633  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:41.290642  721676 round_trippers.go:580]     Audit-Id: 224ae9b1-e57f-460c-9e48-fd35d161df58
	I0906 20:20:41.290649  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:41.290655  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:41.290662  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:41.290668  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:41.290679  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:41 GMT
	I0906 20:20:41.290786  721676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-782472","namespace":"kube-system","uid":"8841f830-f4c4-4cac-8265-3da8e1d4c90c","resourceVersion":"281","creationTimestamp":"2023-09-06T20:20:24Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0a0873e0992ce9209a5f971960d459b8","kubernetes.io/config.mirror":"0a0873e0992ce9209a5f971960d459b8","kubernetes.io/config.seen":"2023-09-06T20:20:24.185462417Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0906 20:20:41.487593  721676 request.go:629] Waited for 196.360922ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:20:41.487658  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:20:41.487672  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:41.487693  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:41.487702  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:41.490481  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:20:41.490546  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:41.490563  721676 round_trippers.go:580]     Audit-Id: 008b77c9-4f73-4631-b452-d06e69289cb9
	I0906 20:20:41.490570  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:41.490578  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:41.490584  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:41.490591  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:41.490600  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:41 GMT
	I0906 20:20:41.490746  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","resourceVersion":"377","creationTimestamp":"2023-09-06T20:20:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138","minikube.k8s.io/name":"multinode-782472","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_06T20_20_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-06T20:20:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0906 20:20:41.491143  721676 pod_ready.go:92] pod "kube-scheduler-multinode-782472" in "kube-system" namespace has status "Ready":"True"
	I0906 20:20:41.491160  721676 pod_ready.go:81] duration metric: took 399.681153ms waiting for pod "kube-scheduler-multinode-782472" in "kube-system" namespace to be "Ready" ...
	I0906 20:20:41.491173  721676 pod_ready.go:38] duration metric: took 2.000378728s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:20:41.491191  721676 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:20:41.491246  721676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:20:41.502958  721676 command_runner.go:130] > 1273
	I0906 20:20:41.504446  721676 api_server.go:72] duration metric: took 4.230397092s to wait for apiserver process to appear ...
	I0906 20:20:41.504469  721676 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:20:41.504486  721676 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0906 20:20:41.513434  721676 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0906 20:20:41.513534  721676 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0906 20:20:41.513547  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:41.513558  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:41.513573  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:41.514836  721676 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 20:20:41.514901  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:41.514916  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:41.514934  721676 round_trippers.go:580]     Content-Length: 263
	I0906 20:20:41.514944  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:41 GMT
	I0906 20:20:41.514954  721676 round_trippers.go:580]     Audit-Id: 9afa0fbb-1c7b-4005-affd-66e794ff3814
	I0906 20:20:41.514964  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:41.514971  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:41.514978  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:41.515009  721676 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I0906 20:20:41.515090  721676 api_server.go:141] control plane version: v1.28.1
	I0906 20:20:41.515107  721676 api_server.go:131] duration metric: took 10.633093ms to wait for apiserver health ...
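	The two requests above are the apiserver readiness check: minikube expects the literal body "ok" from /healthz, then reads /version for the control-plane version string. Below is a minimal standalone Go sketch of that same pattern, for illustration only; the endpoint address is copied from this log, and the anonymous request plus InsecureSkipVerify are simplifications (minikube itself authenticates with the kubeconfig's client certificates, so an unauthenticated request may be rejected by RBAC).

// healthcheck_sketch.go - illustrative only, not minikube's implementation.
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumption: anonymous HTTPS access for brevity; minikube uses client certs.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	// /healthz returns HTTP 200 with the literal body "ok" when the apiserver is healthy.
	resp, err := client.Get("https://192.168.58.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)

	// /version returns the JSON shown above (major, minor, gitVersion, ...).
	resp, err = client.Get("https://192.168.58.2:8443/version")
	if err != nil {
		panic(err)
	}
	var v struct {
		GitVersion string `json:"gitVersion"`
	}
	_ = json.NewDecoder(resp.Body).Decode(&v)
	resp.Body.Close()
	fmt.Println("control plane version:", v.GitVersion)
}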
	I0906 20:20:41.515116  721676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:20:41.687544  721676 request.go:629] Waited for 172.360834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0906 20:20:41.687674  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0906 20:20:41.687687  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:41.687701  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:41.687709  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:41.691935  721676 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 20:20:41.692008  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:41.692031  721676 round_trippers.go:580]     Audit-Id: 8344561c-20fb-4d79-98eb-6b2089cb901d
	I0906 20:20:41.692064  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:41.692102  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:41.692123  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:41.692146  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:41.692194  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:41 GMT
	I0906 20:20:41.693193  721676 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"405"},"items":[{"metadata":{"name":"coredns-5dd5756b68-79759","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b492a232-9d20-4012-8a94-0ff7eca50db6","resourceVersion":"399","creationTimestamp":"2023-09-06T20:20:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5b61ad83-6adc-400b-813b-0fdf43f24858","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b61ad83-6adc-400b-813b-0fdf43f24858\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0906 20:20:41.695848  721676 system_pods.go:59] 8 kube-system pods found
	I0906 20:20:41.695891  721676 system_pods.go:61] "coredns-5dd5756b68-79759" [b492a232-9d20-4012-8a94-0ff7eca50db6] Running
	I0906 20:20:41.695898  721676 system_pods.go:61] "etcd-multinode-782472" [c7fbee74-f36a-435f-b4eb-9e01833854a3] Running
	I0906 20:20:41.695904  721676 system_pods.go:61] "kindnet-whw4s" [92a15983-5281-4989-b838-0b61276da955] Running
	I0906 20:20:41.695914  721676 system_pods.go:61] "kube-apiserver-multinode-782472" [8d109f5d-3d07-4d57-bb86-5144199cf5e8] Running
	I0906 20:20:41.695925  721676 system_pods.go:61] "kube-controller-manager-multinode-782472" [67462036-1f86-4cd8-8872-e0f7c61eec13] Running
	I0906 20:20:41.695933  721676 system_pods.go:61] "kube-proxy-lhjnq" [2eb21731-931d-41b6-a6d8-da9bb0d0d3ff] Running
	I0906 20:20:41.695944  721676 system_pods.go:61] "kube-scheduler-multinode-782472" [8841f830-f4c4-4cac-8265-3da8e1d4c90c] Running
	I0906 20:20:41.695951  721676 system_pods.go:61] "storage-provisioner" [7d968ea2-93d4-4741-8b34-e531ffe5a253] Running
	I0906 20:20:41.695961  721676 system_pods.go:74] duration metric: took 180.839567ms to wait for pod list to return data ...
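	The pod list above is minikube confirming that every kube-system pod is reported as Running before it proceeds. A comparable check written against client-go is sketched below; it is not minikube's code, and the kubeconfig location (clientcmd.RecommendedHomeFile, i.e. ~/.kube/config) is an assumption, since minikube builds its client from the profile's own kubeconfig.

// systempods_sketch.go - list kube-system pods and report whether each is Running.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a local kubeconfig pointing at the cluster from this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		running := p.Status.Phase == corev1.PodRunning
		fmt.Printf("%-45s running=%v\n", p.Name, running)
	}
}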
	I0906 20:20:41.695978  721676 default_sa.go:34] waiting for default service account to be created ...
	I0906 20:20:41.887309  721676 request.go:629] Waited for 191.249241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0906 20:20:41.887386  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0906 20:20:41.887392  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:41.887405  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:41.887413  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:41.890622  721676 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 20:20:41.890648  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:41.890665  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:41 GMT
	I0906 20:20:41.890672  721676 round_trippers.go:580]     Audit-Id: 5b7c189e-e61f-4b6f-b923-7f7348daaf88
	I0906 20:20:41.890679  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:41.890686  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:41.890692  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:41.890699  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:41.890706  721676 round_trippers.go:580]     Content-Length: 261
	I0906 20:20:41.890729  721676 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"406"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"39b652ff-58d0-472f-8d14-070b3016f5fb","resourceVersion":"306","creationTimestamp":"2023-09-06T20:20:36Z"}}]}
	I0906 20:20:41.890997  721676 default_sa.go:45] found service account: "default"
	I0906 20:20:41.891017  721676 default_sa.go:55] duration metric: took 195.035066ms for default service account to be created ...
	I0906 20:20:41.891025  721676 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 20:20:42.087388  721676 request.go:629] Waited for 196.288923ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0906 20:20:42.087493  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0906 20:20:42.087506  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:42.087516  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:42.087527  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:42.091769  721676 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 20:20:42.091860  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:42.091886  721676 round_trippers.go:580]     Audit-Id: 8e140723-3088-43d5-8c59-937772404d42
	I0906 20:20:42.091899  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:42.091908  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:42.091916  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:42.091924  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:42.091931  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:42 GMT
	I0906 20:20:42.092932  721676 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"406"},"items":[{"metadata":{"name":"coredns-5dd5756b68-79759","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b492a232-9d20-4012-8a94-0ff7eca50db6","resourceVersion":"399","creationTimestamp":"2023-09-06T20:20:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5b61ad83-6adc-400b-813b-0fdf43f24858","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b61ad83-6adc-400b-813b-0fdf43f24858\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0906 20:20:42.095574  721676 system_pods.go:86] 8 kube-system pods found
	I0906 20:20:42.095612  721676 system_pods.go:89] "coredns-5dd5756b68-79759" [b492a232-9d20-4012-8a94-0ff7eca50db6] Running
	I0906 20:20:42.095620  721676 system_pods.go:89] "etcd-multinode-782472" [c7fbee74-f36a-435f-b4eb-9e01833854a3] Running
	I0906 20:20:42.095626  721676 system_pods.go:89] "kindnet-whw4s" [92a15983-5281-4989-b838-0b61276da955] Running
	I0906 20:20:42.095637  721676 system_pods.go:89] "kube-apiserver-multinode-782472" [8d109f5d-3d07-4d57-bb86-5144199cf5e8] Running
	I0906 20:20:42.095643  721676 system_pods.go:89] "kube-controller-manager-multinode-782472" [67462036-1f86-4cd8-8872-e0f7c61eec13] Running
	I0906 20:20:42.095649  721676 system_pods.go:89] "kube-proxy-lhjnq" [2eb21731-931d-41b6-a6d8-da9bb0d0d3ff] Running
	I0906 20:20:42.095656  721676 system_pods.go:89] "kube-scheduler-multinode-782472" [8841f830-f4c4-4cac-8265-3da8e1d4c90c] Running
	I0906 20:20:42.095667  721676 system_pods.go:89] "storage-provisioner" [7d968ea2-93d4-4741-8b34-e531ffe5a253] Running
	I0906 20:20:42.095675  721676 system_pods.go:126] duration metric: took 204.645193ms to wait for k8s-apps to be running ...
	I0906 20:20:42.095695  721676 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 20:20:42.095817  721676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:20:42.112570  721676 system_svc.go:56] duration metric: took 16.862458ms WaitForService to wait for kubelet.
	I0906 20:20:42.112601  721676 kubeadm.go:581] duration metric: took 4.838556221s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 20:20:42.112627  721676 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:20:42.288088  721676 request.go:629] Waited for 175.371635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0906 20:20:42.288185  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0906 20:20:42.288196  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:42.288206  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:42.288214  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:42.291023  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:20:42.291048  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:42.291057  721676 round_trippers.go:580]     Audit-Id: a8ce6bee-8c06-4064-b44e-706429047e83
	I0906 20:20:42.291064  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:42.291071  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:42.291078  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:42.291087  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:42.291094  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:42 GMT
	I0906 20:20:42.291248  721676 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"406"},"items":[{"metadata":{"name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","resourceVersion":"377","creationTimestamp":"2023-09-06T20:20:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138","minikube.k8s.io/name":"multinode-782472","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_06T20_20_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I0906 20:20:42.291738  721676 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0906 20:20:42.291760  721676 node_conditions.go:123] node cpu capacity is 2
	I0906 20:20:42.291774  721676 node_conditions.go:105] duration metric: took 179.140402ms to run NodePressure ...
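	The two capacity figures logged above (203034800Ki of ephemeral storage, 2 CPUs) are read from the node's status during the NodePressure verification. The short function below sketches reading the same fields with client-go; the package and function names are illustrative, and it reuses a clientset built as in the previous sketch.

// nodecapacity_sketch.go - print each node's CPU and ephemeral-storage capacity.
package nodeinfo

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func PrintNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
	return nil
}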
	I0906 20:20:42.291791  721676 start.go:228] waiting for startup goroutines ...
	I0906 20:20:42.291797  721676 start.go:233] waiting for cluster config update ...
	I0906 20:20:42.291812  721676 start.go:242] writing updated cluster config ...
	I0906 20:20:42.295096  721676 out.go:177] 
	I0906 20:20:42.297339  721676 config.go:182] Loaded profile config "multinode-782472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 20:20:42.297456  721676 profile.go:148] Saving config to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/config.json ...
	I0906 20:20:42.299684  721676 out.go:177] * Starting worker node multinode-782472-m02 in cluster multinode-782472
	I0906 20:20:42.301655  721676 cache.go:122] Beginning downloading kic base image for docker with crio
	I0906 20:20:42.303615  721676 out.go:177] * Pulling base image ...
	I0906 20:20:42.306150  721676 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0906 20:20:42.306187  721676 cache.go:57] Caching tarball of preloaded images
	I0906 20:20:42.306235  721676 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon
	I0906 20:20:42.306283  721676 preload.go:174] Found /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0906 20:20:42.306300  721676 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0906 20:20:42.306426  721676 profile.go:148] Saving config to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/config.json ...
	I0906 20:20:42.324678  721676 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon, skipping pull
	I0906 20:20:42.324706  721676 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad exists in daemon, skipping load
	I0906 20:20:42.324732  721676 cache.go:195] Successfully downloaded all kic artifacts
	I0906 20:20:42.324768  721676 start.go:365] acquiring machines lock for multinode-782472-m02: {Name:mk29660f4e498a8fc28a67670fbdaae145e763ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:20:42.324912  721676 start.go:369] acquired machines lock for "multinode-782472-m02" in 125.784µs
	I0906 20:20:42.324943  721676 start.go:93] Provisioning new machine with config: &{Name:multinode-782472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-782472 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0906 20:20:42.325040  721676 start.go:125] createHost starting for "m02" (driver="docker")
	I0906 20:20:42.327731  721676 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0906 20:20:42.327864  721676 start.go:159] libmachine.API.Create for "multinode-782472" (driver="docker")
	I0906 20:20:42.327889  721676 client.go:168] LocalClient.Create starting
	I0906 20:20:42.327976  721676 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem
	I0906 20:20:42.328009  721676 main.go:141] libmachine: Decoding PEM data...
	I0906 20:20:42.328025  721676 main.go:141] libmachine: Parsing certificate...
	I0906 20:20:42.328098  721676 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem
	I0906 20:20:42.328118  721676 main.go:141] libmachine: Decoding PEM data...
	I0906 20:20:42.328129  721676 main.go:141] libmachine: Parsing certificate...
	I0906 20:20:42.328380  721676 cli_runner.go:164] Run: docker network inspect multinode-782472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0906 20:20:42.346938  721676 network_create.go:76] Found existing network {name:multinode-782472 subnet:0x40011e5050 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0906 20:20:42.346994  721676 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-782472-m02" container
	I0906 20:20:42.347078  721676 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0906 20:20:42.369485  721676 cli_runner.go:164] Run: docker volume create multinode-782472-m02 --label name.minikube.sigs.k8s.io=multinode-782472-m02 --label created_by.minikube.sigs.k8s.io=true
	I0906 20:20:42.389312  721676 oci.go:103] Successfully created a docker volume multinode-782472-m02
	I0906 20:20:42.389411  721676 cli_runner.go:164] Run: docker run --rm --name multinode-782472-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-782472-m02 --entrypoint /usr/bin/test -v multinode-782472-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -d /var/lib
	I0906 20:20:42.950852  721676 oci.go:107] Successfully prepared a docker volume multinode-782472-m02
	I0906 20:20:42.950900  721676 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0906 20:20:42.950927  721676 kic.go:190] Starting extracting preloaded images to volume ...
	I0906 20:20:42.951045  721676 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-782472-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -I lz4 -xf /preloaded.tar -C /extractDir
	I0906 20:20:47.256052  721676 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-782472-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -I lz4 -xf /preloaded.tar -C /extractDir: (4.304962072s)
	I0906 20:20:47.256083  721676 kic.go:199] duration metric: took 4.305153 seconds to extract preloaded images to volume
	W0906 20:20:47.256243  721676 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0906 20:20:47.256364  721676 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0906 20:20:47.328284  721676 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-782472-m02 --name multinode-782472-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-782472-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-782472-m02 --network multinode-782472 --ip 192.168.58.3 --volume multinode-782472-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad
	I0906 20:20:47.702003  721676 cli_runner.go:164] Run: docker container inspect multinode-782472-m02 --format={{.State.Running}}
	I0906 20:20:47.726364  721676 cli_runner.go:164] Run: docker container inspect multinode-782472-m02 --format={{.State.Status}}
	I0906 20:20:47.751066  721676 cli_runner.go:164] Run: docker exec multinode-782472-m02 stat /var/lib/dpkg/alternatives/iptables
	I0906 20:20:47.834076  721676 oci.go:144] the created container "multinode-782472-m02" has a running status.
	I0906 20:20:47.834103  721676 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17116-652515/.minikube/machines/multinode-782472-m02/id_rsa...
	I0906 20:20:48.264284  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/machines/multinode-782472-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0906 20:20:48.264338  721676 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17116-652515/.minikube/machines/multinode-782472-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0906 20:20:48.292442  721676 cli_runner.go:164] Run: docker container inspect multinode-782472-m02 --format={{.State.Status}}
	I0906 20:20:48.324699  721676 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0906 20:20:48.324719  721676 kic_runner.go:114] Args: [docker exec --privileged multinode-782472-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0906 20:20:48.410618  721676 cli_runner.go:164] Run: docker container inspect multinode-782472-m02 --format={{.State.Status}}
	I0906 20:20:48.437927  721676 machine.go:88] provisioning docker machine ...
	I0906 20:20:48.437957  721676 ubuntu.go:169] provisioning hostname "multinode-782472-m02"
	I0906 20:20:48.438021  721676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-782472-m02
	I0906 20:20:48.461022  721676 main.go:141] libmachine: Using SSH client type: native
	I0906 20:20:48.461799  721676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33497 <nil> <nil>}
	I0906 20:20:48.461819  721676 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-782472-m02 && echo "multinode-782472-m02" | sudo tee /etc/hostname
	I0906 20:20:48.462831  721676 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0906 20:20:51.617975  721676 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-782472-m02
	
	I0906 20:20:51.618125  721676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-782472-m02
	I0906 20:20:51.638627  721676 main.go:141] libmachine: Using SSH client type: native
	I0906 20:20:51.639071  721676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33497 <nil> <nil>}
	I0906 20:20:51.639094  721676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-782472-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-782472-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-782472-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:20:51.779943  721676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:20:51.779969  721676 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17116-652515/.minikube CaCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17116-652515/.minikube}
	I0906 20:20:51.779987  721676 ubuntu.go:177] setting up certificates
	I0906 20:20:51.780001  721676 provision.go:83] configureAuth start
	I0906 20:20:51.780062  721676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-782472-m02
	I0906 20:20:51.798878  721676 provision.go:138] copyHostCerts
	I0906 20:20:51.798924  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem
	I0906 20:20:51.798966  721676 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem, removing ...
	I0906 20:20:51.798977  721676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem
	I0906 20:20:51.799073  721676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem (1082 bytes)
	I0906 20:20:51.799152  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem
	I0906 20:20:51.799177  721676 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem, removing ...
	I0906 20:20:51.799185  721676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem
	I0906 20:20:51.799226  721676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem (1123 bytes)
	I0906 20:20:51.799303  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem
	I0906 20:20:51.799325  721676 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem, removing ...
	I0906 20:20:51.799340  721676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem
	I0906 20:20:51.799376  721676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem (1679 bytes)
	I0906 20:20:51.799445  721676 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem org=jenkins.multinode-782472-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-782472-m02]
	I0906 20:20:52.506945  721676 provision.go:172] copyRemoteCerts
	I0906 20:20:52.507023  721676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:20:52.507078  721676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-782472-m02
	I0906 20:20:52.526638  721676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33497 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/multinode-782472-m02/id_rsa Username:docker}
	I0906 20:20:52.629248  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 20:20:52.629308  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 20:20:52.659829  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 20:20:52.659896  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0906 20:20:52.693079  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 20:20:52.693167  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 20:20:52.723422  721676 provision.go:86] duration metric: configureAuth took 943.406776ms
	I0906 20:20:52.723451  721676 ubuntu.go:193] setting minikube options for container-runtime
	I0906 20:20:52.723646  721676 config.go:182] Loaded profile config "multinode-782472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 20:20:52.723765  721676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-782472-m02
	I0906 20:20:52.745551  721676 main.go:141] libmachine: Using SSH client type: native
	I0906 20:20:52.746078  721676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33497 <nil> <nil>}
	I0906 20:20:52.746100  721676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:20:53.018518  721676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:20:53.018548  721676 machine.go:91] provisioned docker machine in 4.580602422s
	I0906 20:20:53.018558  721676 client.go:171] LocalClient.Create took 10.690663202s
	I0906 20:20:53.018572  721676 start.go:167] duration metric: libmachine.API.Create for "multinode-782472" took 10.690711317s
	I0906 20:20:53.018580  721676 start.go:300] post-start starting for "multinode-782472-m02" (driver="docker")
	I0906 20:20:53.018588  721676 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:20:53.018658  721676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:20:53.018707  721676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-782472-m02
	I0906 20:20:53.039106  721676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33497 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/multinode-782472-m02/id_rsa Username:docker}
	I0906 20:20:53.142026  721676 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:20:53.146470  721676 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0906 20:20:53.146492  721676 command_runner.go:130] > NAME="Ubuntu"
	I0906 20:20:53.146499  721676 command_runner.go:130] > VERSION_ID="22.04"
	I0906 20:20:53.146505  721676 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0906 20:20:53.146511  721676 command_runner.go:130] > VERSION_CODENAME=jammy
	I0906 20:20:53.146515  721676 command_runner.go:130] > ID=ubuntu
	I0906 20:20:53.146520  721676 command_runner.go:130] > ID_LIKE=debian
	I0906 20:20:53.146526  721676 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0906 20:20:53.146531  721676 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0906 20:20:53.146539  721676 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0906 20:20:53.146552  721676 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0906 20:20:53.146561  721676 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0906 20:20:53.146601  721676 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 20:20:53.146629  721676 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 20:20:53.146644  721676 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 20:20:53.146661  721676 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0906 20:20:53.146673  721676 filesync.go:126] Scanning /home/jenkins/minikube-integration/17116-652515/.minikube/addons for local assets ...
	I0906 20:20:53.146748  721676 filesync.go:126] Scanning /home/jenkins/minikube-integration/17116-652515/.minikube/files for local assets ...
	I0906 20:20:53.146833  721676 filesync.go:149] local asset: /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem -> 6579002.pem in /etc/ssl/certs
	I0906 20:20:53.146844  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem -> /etc/ssl/certs/6579002.pem
	I0906 20:20:53.146942  721676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:20:53.157969  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem --> /etc/ssl/certs/6579002.pem (1708 bytes)
	I0906 20:20:53.187024  721676 start.go:303] post-start completed in 168.430509ms
	I0906 20:20:53.187453  721676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-782472-m02
	I0906 20:20:53.206028  721676 profile.go:148] Saving config to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/config.json ...
	I0906 20:20:53.206350  721676 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 20:20:53.206395  721676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-782472-m02
	I0906 20:20:53.224608  721676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33497 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/multinode-782472-m02/id_rsa Username:docker}
	I0906 20:20:53.320741  721676 command_runner.go:130] > 17%!
	(MISSING)I0906 20:20:53.320825  721676 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 20:20:53.327225  721676 command_runner.go:130] > 162G
	I0906 20:20:53.327256  721676 start.go:128] duration metric: createHost completed in 11.002207941s
	I0906 20:20:53.327276  721676 start.go:83] releasing machines lock for "multinode-782472-m02", held for 11.002355337s
	I0906 20:20:53.327348  721676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-782472-m02
	I0906 20:20:53.348769  721676 out.go:177] * Found network options:
	I0906 20:20:53.351384  721676 out.go:177]   - NO_PROXY=192.168.58.2
	W0906 20:20:53.353509  721676 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 20:20:53.353554  721676 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 20:20:53.353628  721676 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:20:53.353678  721676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-782472-m02
	I0906 20:20:53.353964  721676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:20:53.354016  721676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-782472-m02
	I0906 20:20:53.374159  721676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33497 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/multinode-782472-m02/id_rsa Username:docker}
	I0906 20:20:53.386391  721676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33497 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/multinode-782472-m02/id_rsa Username:docker}
	I0906 20:20:53.618194  721676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 20:20:53.642376  721676 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0906 20:20:53.645926  721676 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0906 20:20:53.645948  721676 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0906 20:20:53.645956  721676 command_runner.go:130] > Device: b3h/179d	Inode: 5449409     Links: 1
	I0906 20:20:53.645981  721676 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0906 20:20:53.645990  721676 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0906 20:20:53.645996  721676 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0906 20:20:53.646002  721676 command_runner.go:130] > Change: 2023-09-06 19:57:06.408535289 +0000
	I0906 20:20:53.646007  721676 command_runner.go:130] >  Birth: 2023-09-06 19:57:06.408535289 +0000
	I0906 20:20:53.646132  721676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:20:53.673154  721676 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0906 20:20:53.673236  721676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:20:53.713982  721676 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0906 20:20:53.714143  721676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
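	The loopback and bridge CNI configs above are disabled by renaming them in place (adding a .mk_disabled suffix) rather than deleting them, so they can be restored later. A minimal sketch of the same rename pattern for the loopback config, reconstructed from the find/mv commands in this log (illustrative only, not a minikube source excerpt):
	
	sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' -not -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
	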
	I0906 20:20:53.714172  721676 start.go:466] detecting cgroup driver to use...
	I0906 20:20:53.714237  721676 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0906 20:20:53.714323  721676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:20:53.734285  721676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:20:53.748028  721676 docker.go:196] disabling cri-docker service (if available) ...
	I0906 20:20:53.748142  721676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:20:53.765444  721676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:20:53.787296  721676 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:20:53.886326  721676 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:20:53.904839  721676 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0906 20:20:53.987275  721676 docker.go:212] disabling docker service ...
	I0906 20:20:53.987356  721676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:20:54.016467  721676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:20:54.033055  721676 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:20:54.140409  721676 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0906 20:20:54.140491  721676 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:20:54.255697  721676 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0906 20:20:54.255772  721676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:20:54.270467  721676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:20:54.289234  721676 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0906 20:20:54.290495  721676 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0906 20:20:54.290624  721676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:20:54.303907  721676 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:20:54.304014  721676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:20:54.316953  721676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:20:54.330035  721676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:20:54.342258  721676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:20:54.353931  721676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:20:54.363939  721676 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0906 20:20:54.365098  721676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:20:54.375549  721676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:20:54.477721  721676 ssh_runner.go:195] Run: sudo systemctl restart crio
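	The CRI-O reconfiguration above amounts to a handful of sed edits against /etc/crio/crio.conf.d/02-crio.conf followed by a service restart. A condensed sketch of the same steps, with the pause image and cgroup driver values taken from the log lines above:
	
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio
	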
	I0906 20:20:54.603436  721676 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:20:54.603506  721676 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:20:54.608260  721676 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0906 20:20:54.608281  721676 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0906 20:20:54.608292  721676 command_runner.go:130] > Device: bch/188d	Inode: 190         Links: 1
	I0906 20:20:54.608300  721676 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0906 20:20:54.608307  721676 command_runner.go:130] > Access: 2023-09-06 20:20:54.586513888 +0000
	I0906 20:20:54.608314  721676 command_runner.go:130] > Modify: 2023-09-06 20:20:54.586513888 +0000
	I0906 20:20:54.608325  721676 command_runner.go:130] > Change: 2023-09-06 20:20:54.586513888 +0000
	I0906 20:20:54.608336  721676 command_runner.go:130] >  Birth: -
	I0906 20:20:54.608934  721676 start.go:534] Will wait 60s for crictl version
	I0906 20:20:54.608993  721676 ssh_runner.go:195] Run: which crictl
	I0906 20:20:54.613116  721676 command_runner.go:130] > /usr/bin/crictl
	I0906 20:20:54.613562  721676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:20:54.657611  721676 command_runner.go:130] > Version:  0.1.0
	I0906 20:20:54.657630  721676 command_runner.go:130] > RuntimeName:  cri-o
	I0906 20:20:54.657636  721676 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0906 20:20:54.657645  721676 command_runner.go:130] > RuntimeApiVersion:  v1
	I0906 20:20:54.660611  721676 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0906 20:20:54.660696  721676 ssh_runner.go:195] Run: crio --version
	I0906 20:20:54.705569  721676 command_runner.go:130] > crio version 1.24.6
	I0906 20:20:54.705589  721676 command_runner.go:130] > Version:          1.24.6
	I0906 20:20:54.705601  721676 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0906 20:20:54.705607  721676 command_runner.go:130] > GitTreeState:     clean
	I0906 20:20:54.705613  721676 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0906 20:20:54.705619  721676 command_runner.go:130] > GoVersion:        go1.18.2
	I0906 20:20:54.705624  721676 command_runner.go:130] > Compiler:         gc
	I0906 20:20:54.705630  721676 command_runner.go:130] > Platform:         linux/arm64
	I0906 20:20:54.705636  721676 command_runner.go:130] > Linkmode:         dynamic
	I0906 20:20:54.705646  721676 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0906 20:20:54.705655  721676 command_runner.go:130] > SeccompEnabled:   true
	I0906 20:20:54.705661  721676 command_runner.go:130] > AppArmorEnabled:  false
	I0906 20:20:54.707952  721676 ssh_runner.go:195] Run: crio --version
	I0906 20:20:54.751222  721676 command_runner.go:130] > crio version 1.24.6
	I0906 20:20:54.751242  721676 command_runner.go:130] > Version:          1.24.6
	I0906 20:20:54.751251  721676 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0906 20:20:54.751256  721676 command_runner.go:130] > GitTreeState:     clean
	I0906 20:20:54.751263  721676 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0906 20:20:54.751268  721676 command_runner.go:130] > GoVersion:        go1.18.2
	I0906 20:20:54.751273  721676 command_runner.go:130] > Compiler:         gc
	I0906 20:20:54.751283  721676 command_runner.go:130] > Platform:         linux/arm64
	I0906 20:20:54.751289  721676 command_runner.go:130] > Linkmode:         dynamic
	I0906 20:20:54.751304  721676 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0906 20:20:54.751310  721676 command_runner.go:130] > SeccompEnabled:   true
	I0906 20:20:54.751315  721676 command_runner.go:130] > AppArmorEnabled:  false
	I0906 20:20:54.756884  721676 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0906 20:20:54.759050  721676 out.go:177]   - env NO_PROXY=192.168.58.2
	I0906 20:20:54.760927  721676 cli_runner.go:164] Run: docker network inspect multinode-782472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0906 20:20:54.778222  721676 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0906 20:20:54.783659  721676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
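	The host.minikube.internal entry is refreshed by filtering any existing entry out of /etc/hosts, appending the gateway address, and copying the result back over the original. A minimal equivalent, with the 192.168.58.1 address taken from the log (the temp-file name is illustrative):
	
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.58.1\thost.minikube.internal\n'; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts
	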
	I0906 20:20:54.797009  721676 certs.go:56] Setting up /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472 for IP: 192.168.58.3
	I0906 20:20:54.797041  721676 certs.go:190] acquiring lock for shared ca certs: {Name:mk5596cf7beb26b5b83b50e551aa70cf266830a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:20:54.797193  721676 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.key
	I0906 20:20:54.797243  721676 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.key
	I0906 20:20:54.797258  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 20:20:54.797271  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 20:20:54.797288  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 20:20:54.797302  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 20:20:54.797356  721676 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/657900.pem (1338 bytes)
	W0906 20:20:54.797390  721676 certs.go:433] ignoring /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/657900_empty.pem, impossibly tiny 0 bytes
	I0906 20:20:54.797404  721676 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:20:54.797433  721676 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem (1082 bytes)
	I0906 20:20:54.797465  721676 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:20:54.797495  721676 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem (1679 bytes)
	I0906 20:20:54.797543  721676 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem (1708 bytes)
	I0906 20:20:54.797574  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem -> /usr/share/ca-certificates/6579002.pem
	I0906 20:20:54.797589  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:20:54.797601  721676 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/657900.pem -> /usr/share/ca-certificates/657900.pem
	I0906 20:20:54.797939  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:20:54.826947  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 20:20:54.857151  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:20:54.887651  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:20:54.918865  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem --> /usr/share/ca-certificates/6579002.pem (1708 bytes)
	I0906 20:20:54.953070  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:20:54.992135  721676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/certs/657900.pem --> /usr/share/ca-certificates/657900.pem (1338 bytes)
	I0906 20:20:55.075758  721676 ssh_runner.go:195] Run: openssl version
	I0906 20:20:55.084038  721676 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0906 20:20:55.084176  721676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6579002.pem && ln -fs /usr/share/ca-certificates/6579002.pem /etc/ssl/certs/6579002.pem"
	I0906 20:20:55.106337  721676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6579002.pem
	I0906 20:20:55.113987  721676 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep  6 20:04 /usr/share/ca-certificates/6579002.pem
	I0906 20:20:55.114744  721676 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 20:04 /usr/share/ca-certificates/6579002.pem
	I0906 20:20:55.114814  721676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6579002.pem
	I0906 20:20:55.127064  721676 command_runner.go:130] > 3ec20f2e
	I0906 20:20:55.127194  721676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6579002.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:20:55.143735  721676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:20:55.158495  721676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:20:55.165439  721676 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep  6 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:20:55.165546  721676 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:20:55.165633  721676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:20:55.178654  721676 command_runner.go:130] > b5213941
	I0906 20:20:55.178860  721676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:20:55.194393  721676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/657900.pem && ln -fs /usr/share/ca-certificates/657900.pem /etc/ssl/certs/657900.pem"
	I0906 20:20:55.209179  721676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/657900.pem
	I0906 20:20:55.214674  721676 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep  6 20:04 /usr/share/ca-certificates/657900.pem
	I0906 20:20:55.214791  721676 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 20:04 /usr/share/ca-certificates/657900.pem
	I0906 20:20:55.214854  721676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/657900.pem
	I0906 20:20:55.223722  721676 command_runner.go:130] > 51391683
	I0906 20:20:55.224523  721676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/657900.pem /etc/ssl/certs/51391683.0"
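	Each CA file above is made visible to OpenSSL by linking it into /etc/ssl/certs under its subject hash, which is what the openssl x509 -hash calls compute (3ec20f2e, b5213941 and 51391683 in this run). A minimal sketch of that convention for one of the files, assuming the same paths as in the log:
	
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	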
	I0906 20:20:55.236936  721676 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0906 20:20:55.241677  721676 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0906 20:20:55.241716  721676 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0906 20:20:55.241810  721676 ssh_runner.go:195] Run: crio config
	I0906 20:20:55.302432  721676 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0906 20:20:55.302457  721676 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0906 20:20:55.302466  721676 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0906 20:20:55.302470  721676 command_runner.go:130] > #
	I0906 20:20:55.302478  721676 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0906 20:20:55.302486  721676 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0906 20:20:55.302497  721676 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0906 20:20:55.302511  721676 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0906 20:20:55.302516  721676 command_runner.go:130] > # reload'.
	I0906 20:20:55.302528  721676 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0906 20:20:55.302537  721676 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0906 20:20:55.302548  721676 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0906 20:20:55.302556  721676 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0906 20:20:55.302565  721676 command_runner.go:130] > [crio]
	I0906 20:20:55.302572  721676 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0906 20:20:55.302578  721676 command_runner.go:130] > # containers images, in this directory.
	I0906 20:20:55.302587  721676 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0906 20:20:55.302596  721676 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0906 20:20:55.302865  721676 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0906 20:20:55.302881  721676 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0906 20:20:55.302889  721676 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0906 20:20:55.302896  721676 command_runner.go:130] > # storage_driver = "vfs"
	I0906 20:20:55.302902  721676 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0906 20:20:55.302910  721676 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0906 20:20:55.302920  721676 command_runner.go:130] > # storage_option = [
	I0906 20:20:55.303148  721676 command_runner.go:130] > # ]
	I0906 20:20:55.303164  721676 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0906 20:20:55.303172  721676 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0906 20:20:55.303178  721676 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0906 20:20:55.303188  721676 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0906 20:20:55.303199  721676 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0906 20:20:55.303209  721676 command_runner.go:130] > # always happen on a node reboot
	I0906 20:20:55.303215  721676 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0906 20:20:55.303227  721676 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0906 20:20:55.303234  721676 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0906 20:20:55.303243  721676 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0906 20:20:55.303253  721676 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0906 20:20:55.303262  721676 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0906 20:20:55.303272  721676 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0906 20:20:55.303282  721676 command_runner.go:130] > # internal_wipe = true
	I0906 20:20:55.303289  721676 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0906 20:20:55.303297  721676 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0906 20:20:55.303307  721676 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0906 20:20:55.303314  721676 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0906 20:20:55.303321  721676 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0906 20:20:55.303329  721676 command_runner.go:130] > [crio.api]
	I0906 20:20:55.303341  721676 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0906 20:20:55.303353  721676 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0906 20:20:55.303360  721676 command_runner.go:130] > # IP address on which the stream server will listen.
	I0906 20:20:55.303365  721676 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0906 20:20:55.303373  721676 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0906 20:20:55.303382  721676 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0906 20:20:55.303387  721676 command_runner.go:130] > # stream_port = "0"
	I0906 20:20:55.303398  721676 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0906 20:20:55.303404  721676 command_runner.go:130] > # stream_enable_tls = false
	I0906 20:20:55.303415  721676 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0906 20:20:55.303421  721676 command_runner.go:130] > # stream_idle_timeout = ""
	I0906 20:20:55.303432  721676 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0906 20:20:55.303440  721676 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0906 20:20:55.303453  721676 command_runner.go:130] > # minutes.
	I0906 20:20:55.303458  721676 command_runner.go:130] > # stream_tls_cert = ""
	I0906 20:20:55.303466  721676 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0906 20:20:55.303475  721676 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0906 20:20:55.303487  721676 command_runner.go:130] > # stream_tls_key = ""
	I0906 20:20:55.303495  721676 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0906 20:20:55.303506  721676 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0906 20:20:55.303513  721676 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0906 20:20:55.303523  721676 command_runner.go:130] > # stream_tls_ca = ""
	I0906 20:20:55.303532  721676 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0906 20:20:55.303541  721676 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0906 20:20:55.303550  721676 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0906 20:20:55.303555  721676 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0906 20:20:55.303584  721676 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0906 20:20:55.303596  721676 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0906 20:20:55.303601  721676 command_runner.go:130] > [crio.runtime]
	I0906 20:20:55.303613  721676 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0906 20:20:55.303620  721676 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0906 20:20:55.303628  721676 command_runner.go:130] > # "nofile=1024:2048"
	I0906 20:20:55.303635  721676 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0906 20:20:55.303640  721676 command_runner.go:130] > # default_ulimits = [
	I0906 20:20:55.303644  721676 command_runner.go:130] > # ]
	I0906 20:20:55.303652  721676 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0906 20:20:55.303658  721676 command_runner.go:130] > # no_pivot = false
	I0906 20:20:55.303666  721676 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0906 20:20:55.303675  721676 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0906 20:20:55.303684  721676 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0906 20:20:55.303692  721676 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0906 20:20:55.303703  721676 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0906 20:20:55.303712  721676 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0906 20:20:55.303720  721676 command_runner.go:130] > # conmon = ""
	I0906 20:20:55.303726  721676 command_runner.go:130] > # Cgroup setting for conmon
	I0906 20:20:55.303735  721676 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0906 20:20:55.303743  721676 command_runner.go:130] > conmon_cgroup = "pod"
	I0906 20:20:55.303756  721676 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0906 20:20:55.303766  721676 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0906 20:20:55.303774  721676 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0906 20:20:55.303782  721676 command_runner.go:130] > # conmon_env = [
	I0906 20:20:55.303787  721676 command_runner.go:130] > # ]
	I0906 20:20:55.303793  721676 command_runner.go:130] > # Additional environment variables to set for all the
	I0906 20:20:55.303802  721676 command_runner.go:130] > # containers. These are overridden if set in the
	I0906 20:20:55.303810  721676 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0906 20:20:55.303820  721676 command_runner.go:130] > # default_env = [
	I0906 20:20:55.303825  721676 command_runner.go:130] > # ]
	I0906 20:20:55.303832  721676 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0906 20:20:55.303838  721676 command_runner.go:130] > # selinux = false
	I0906 20:20:55.303846  721676 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0906 20:20:55.303861  721676 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0906 20:20:55.303868  721676 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0906 20:20:55.303876  721676 command_runner.go:130] > # seccomp_profile = ""
	I0906 20:20:55.303883  721676 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0906 20:20:55.303894  721676 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0906 20:20:55.303902  721676 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0906 20:20:55.303912  721676 command_runner.go:130] > # which might increase security.
	I0906 20:20:55.303918  721676 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0906 20:20:55.303926  721676 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0906 20:20:55.303935  721676 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0906 20:20:55.303945  721676 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0906 20:20:55.303958  721676 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0906 20:20:55.303965  721676 command_runner.go:130] > # This option supports live configuration reload.
	I0906 20:20:55.303974  721676 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0906 20:20:55.303981  721676 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0906 20:20:55.303990  721676 command_runner.go:130] > # the cgroup blockio controller.
	I0906 20:20:55.303995  721676 command_runner.go:130] > # blockio_config_file = ""
	I0906 20:20:55.304003  721676 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0906 20:20:55.304011  721676 command_runner.go:130] > # irqbalance daemon.
	I0906 20:20:55.304268  721676 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0906 20:20:55.304288  721676 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0906 20:20:55.304295  721676 command_runner.go:130] > # This option supports live configuration reload.
	I0906 20:20:55.304305  721676 command_runner.go:130] > # rdt_config_file = ""
	I0906 20:20:55.304311  721676 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0906 20:20:55.304323  721676 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0906 20:20:55.304331  721676 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0906 20:20:55.304339  721676 command_runner.go:130] > # separate_pull_cgroup = ""
	I0906 20:20:55.304347  721676 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0906 20:20:55.304355  721676 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0906 20:20:55.304363  721676 command_runner.go:130] > # will be added.
	I0906 20:20:55.304369  721676 command_runner.go:130] > # default_capabilities = [
	I0906 20:20:55.304373  721676 command_runner.go:130] > # 	"CHOWN",
	I0906 20:20:55.304384  721676 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0906 20:20:55.304391  721676 command_runner.go:130] > # 	"FSETID",
	I0906 20:20:55.304396  721676 command_runner.go:130] > # 	"FOWNER",
	I0906 20:20:55.304402  721676 command_runner.go:130] > # 	"SETGID",
	I0906 20:20:55.304409  721676 command_runner.go:130] > # 	"SETUID",
	I0906 20:20:55.304414  721676 command_runner.go:130] > # 	"SETPCAP",
	I0906 20:20:55.304424  721676 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0906 20:20:55.304429  721676 command_runner.go:130] > # 	"KILL",
	I0906 20:20:55.304433  721676 command_runner.go:130] > # ]
	I0906 20:20:55.304448  721676 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0906 20:20:55.304457  721676 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0906 20:20:55.304466  721676 command_runner.go:130] > # add_inheritable_capabilities = true
	I0906 20:20:55.304474  721676 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0906 20:20:55.304481  721676 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0906 20:20:55.304488  721676 command_runner.go:130] > # default_sysctls = [
	I0906 20:20:55.304493  721676 command_runner.go:130] > # ]
	I0906 20:20:55.304498  721676 command_runner.go:130] > # List of devices on the host that a
	I0906 20:20:55.304507  721676 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0906 20:20:55.304515  721676 command_runner.go:130] > # allowed_devices = [
	I0906 20:20:55.304520  721676 command_runner.go:130] > # 	"/dev/fuse",
	I0906 20:20:55.304524  721676 command_runner.go:130] > # ]
	I0906 20:20:55.304530  721676 command_runner.go:130] > # List of additional devices. specified as
	I0906 20:20:55.304551  721676 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0906 20:20:55.304562  721676 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0906 20:20:55.304570  721676 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0906 20:20:55.304577  721676 command_runner.go:130] > # additional_devices = [
	I0906 20:20:55.304844  721676 command_runner.go:130] > # ]
	I0906 20:20:55.304860  721676 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0906 20:20:55.304866  721676 command_runner.go:130] > # cdi_spec_dirs = [
	I0906 20:20:55.304871  721676 command_runner.go:130] > # 	"/etc/cdi",
	I0906 20:20:55.304875  721676 command_runner.go:130] > # 	"/var/run/cdi",
	I0906 20:20:55.304883  721676 command_runner.go:130] > # ]
	I0906 20:20:55.304892  721676 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0906 20:20:55.304904  721676 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0906 20:20:55.304909  721676 command_runner.go:130] > # Defaults to false.
	I0906 20:20:55.304921  721676 command_runner.go:130] > # device_ownership_from_security_context = false
	I0906 20:20:55.304930  721676 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0906 20:20:55.304941  721676 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0906 20:20:55.304946  721676 command_runner.go:130] > # hooks_dir = [
	I0906 20:20:55.304958  721676 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0906 20:20:55.304962  721676 command_runner.go:130] > # ]
	I0906 20:20:55.304970  721676 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0906 20:20:55.304979  721676 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0906 20:20:55.304987  721676 command_runner.go:130] > # its default mounts from the following two files:
	I0906 20:20:55.304992  721676 command_runner.go:130] > #
	I0906 20:20:55.305001  721676 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0906 20:20:55.305011  721676 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0906 20:20:55.305018  721676 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0906 20:20:55.305025  721676 command_runner.go:130] > #
	I0906 20:20:55.305033  721676 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0906 20:20:55.305045  721676 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0906 20:20:55.305054  721676 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0906 20:20:55.305061  721676 command_runner.go:130] > #      only add mounts it finds in this file.
	I0906 20:20:55.305067  721676 command_runner.go:130] > #
	I0906 20:20:55.305073  721676 command_runner.go:130] > # default_mounts_file = ""
	I0906 20:20:55.305082  721676 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0906 20:20:55.305090  721676 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0906 20:20:55.305098  721676 command_runner.go:130] > # pids_limit = 0
	I0906 20:20:55.305105  721676 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0906 20:20:55.305118  721676 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0906 20:20:55.305126  721676 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0906 20:20:55.305146  721676 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0906 20:20:55.305151  721676 command_runner.go:130] > # log_size_max = -1
	I0906 20:20:55.305160  721676 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0906 20:20:55.305167  721676 command_runner.go:130] > # log_to_journald = false
	I0906 20:20:55.305175  721676 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0906 20:20:55.305182  721676 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0906 20:20:55.305192  721676 command_runner.go:130] > # Path to directory for container attach sockets.
	I0906 20:20:55.305199  721676 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0906 20:20:55.305210  721676 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0906 20:20:55.305216  721676 command_runner.go:130] > # bind_mount_prefix = ""
	I0906 20:20:55.305227  721676 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0906 20:20:55.305232  721676 command_runner.go:130] > # read_only = false
	I0906 20:20:55.305240  721676 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0906 20:20:55.305247  721676 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0906 20:20:55.305255  721676 command_runner.go:130] > # live configuration reload.
	I0906 20:20:55.305278  721676 command_runner.go:130] > # log_level = "info"
	I0906 20:20:55.305290  721676 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0906 20:20:55.305296  721676 command_runner.go:130] > # This option supports live configuration reload.
	I0906 20:20:55.305305  721676 command_runner.go:130] > # log_filter = ""
	I0906 20:20:55.305313  721676 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0906 20:20:55.305321  721676 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0906 20:20:55.305329  721676 command_runner.go:130] > # separated by comma.
	I0906 20:20:55.305334  721676 command_runner.go:130] > # uid_mappings = ""
	I0906 20:20:55.305342  721676 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0906 20:20:55.305351  721676 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0906 20:20:55.305358  721676 command_runner.go:130] > # separated by comma.
	I0906 20:20:55.305365  721676 command_runner.go:130] > # gid_mappings = ""
	I0906 20:20:55.305373  721676 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0906 20:20:55.305384  721676 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0906 20:20:55.305391  721676 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0906 20:20:55.305400  721676 command_runner.go:130] > # minimum_mappable_uid = -1
	I0906 20:20:55.305407  721676 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0906 20:20:55.305418  721676 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0906 20:20:55.305426  721676 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0906 20:20:55.305432  721676 command_runner.go:130] > # minimum_mappable_gid = -1
	I0906 20:20:55.305440  721676 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0906 20:20:55.305448  721676 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0906 20:20:55.305457  721676 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0906 20:20:55.305462  721676 command_runner.go:130] > # ctr_stop_timeout = 30
	I0906 20:20:55.305473  721676 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0906 20:20:55.305481  721676 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0906 20:20:55.305490  721676 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0906 20:20:55.305497  721676 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0906 20:20:55.305502  721676 command_runner.go:130] > # drop_infra_ctr = true
	I0906 20:20:55.305510  721676 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0906 20:20:55.305520  721676 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0906 20:20:55.305529  721676 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0906 20:20:55.305538  721676 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0906 20:20:55.305546  721676 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0906 20:20:55.305554  721676 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0906 20:20:55.305560  721676 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0906 20:20:55.305573  721676 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0906 20:20:55.305578  721676 command_runner.go:130] > # pinns_path = ""
	I0906 20:20:55.305590  721676 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0906 20:20:55.305598  721676 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0906 20:20:55.305609  721676 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0906 20:20:55.305615  721676 command_runner.go:130] > # default_runtime = "runc"
	I0906 20:20:55.305622  721676 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0906 20:20:55.305631  721676 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0906 20:20:55.305644  721676 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I0906 20:20:55.305651  721676 command_runner.go:130] > # creation as a file is not desired either.
	I0906 20:20:55.305665  721676 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0906 20:20:55.305671  721676 command_runner.go:130] > # the hostname is being managed dynamically.
	I0906 20:20:55.305680  721676 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0906 20:20:55.305684  721676 command_runner.go:130] > # ]
	I0906 20:20:55.305692  721676 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0906 20:20:55.305703  721676 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0906 20:20:55.305711  721676 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0906 20:20:55.305719  721676 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0906 20:20:55.305725  721676 command_runner.go:130] > #
	I0906 20:20:55.305731  721676 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0906 20:20:55.305737  721676 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0906 20:20:55.305747  721676 command_runner.go:130] > #  runtime_type = "oci"
	I0906 20:20:55.305753  721676 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0906 20:20:55.305764  721676 command_runner.go:130] > #  privileged_without_host_devices = false
	I0906 20:20:55.305769  721676 command_runner.go:130] > #  allowed_annotations = []
	I0906 20:20:55.305774  721676 command_runner.go:130] > # Where:
	I0906 20:20:55.305785  721676 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0906 20:20:55.305793  721676 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0906 20:20:55.305801  721676 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0906 20:20:55.305809  721676 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0906 20:20:55.305816  721676 command_runner.go:130] > #   in $PATH.
	I0906 20:20:55.305824  721676 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0906 20:20:55.305831  721676 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0906 20:20:55.305843  721676 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0906 20:20:55.305849  721676 command_runner.go:130] > #   state.
	I0906 20:20:55.305899  721676 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0906 20:20:55.305907  721676 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0906 20:20:55.305928  721676 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0906 20:20:55.305939  721676 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0906 20:20:55.305948  721676 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0906 20:20:55.305959  721676 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0906 20:20:55.305965  721676 command_runner.go:130] > #   The currently recognized values are:
	I0906 20:20:55.305977  721676 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0906 20:20:55.305986  721676 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0906 20:20:55.305996  721676 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0906 20:20:55.306004  721676 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0906 20:20:55.306013  721676 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0906 20:20:55.306026  721676 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0906 20:20:55.306034  721676 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0906 20:20:55.306062  721676 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0906 20:20:55.306069  721676 command_runner.go:130] > #   should be moved to the container's cgroup
	I0906 20:20:55.306408  721676 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0906 20:20:55.306428  721676 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0906 20:20:55.306435  721676 command_runner.go:130] > runtime_type = "oci"
	I0906 20:20:55.306440  721676 command_runner.go:130] > runtime_root = "/run/runc"
	I0906 20:20:55.306459  721676 command_runner.go:130] > runtime_config_path = ""
	I0906 20:20:55.306470  721676 command_runner.go:130] > monitor_path = ""
	I0906 20:20:55.306475  721676 command_runner.go:130] > monitor_cgroup = ""
	I0906 20:20:55.306480  721676 command_runner.go:130] > monitor_exec_cgroup = ""
	I0906 20:20:55.306512  721676 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0906 20:20:55.306523  721676 command_runner.go:130] > # running containers
	I0906 20:20:55.306529  721676 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0906 20:20:55.306537  721676 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0906 20:20:55.306545  721676 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0906 20:20:55.306552  721676 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0906 20:20:55.306562  721676 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0906 20:20:55.306568  721676 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0906 20:20:55.306576  721676 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0906 20:20:55.306582  721676 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0906 20:20:55.306591  721676 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0906 20:20:55.306596  721676 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0906 20:20:55.306604  721676 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0906 20:20:55.306616  721676 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0906 20:20:55.306624  721676 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0906 20:20:55.306633  721676 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0906 20:20:55.306645  721676 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0906 20:20:55.306655  721676 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0906 20:20:55.306666  721676 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0906 20:20:55.306678  721676 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0906 20:20:55.306686  721676 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0906 20:20:55.306698  721676 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0906 20:20:55.306703  721676 command_runner.go:130] > # Example:
	I0906 20:20:55.306709  721676 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0906 20:20:55.306714  721676 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0906 20:20:55.306723  721676 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0906 20:20:55.306731  721676 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0906 20:20:55.306738  721676 command_runner.go:130] > # cpuset = 0
	I0906 20:20:55.306743  721676 command_runner.go:130] > # cpushares = "0-1"
	I0906 20:20:55.306747  721676 command_runner.go:130] > # Where:
	I0906 20:20:55.306752  721676 command_runner.go:130] > # The workload name is workload-type.
	I0906 20:20:55.306763  721676 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0906 20:20:55.306772  721676 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0906 20:20:55.306779  721676 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0906 20:20:55.306792  721676 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0906 20:20:55.306799  721676 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0906 20:20:55.306803  721676 command_runner.go:130] > # 
	I0906 20:20:55.306812  721676 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0906 20:20:55.306818  721676 command_runner.go:130] > #
	I0906 20:20:55.306825  721676 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0906 20:20:55.306835  721676 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0906 20:20:55.306844  721676 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0906 20:20:55.306864  721676 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0906 20:20:55.306884  721676 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0906 20:20:55.306889  721676 command_runner.go:130] > [crio.image]
	I0906 20:20:55.306897  721676 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0906 20:20:55.306970  721676 command_runner.go:130] > # default_transport = "docker://"
	I0906 20:20:55.306985  721676 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0906 20:20:55.306994  721676 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0906 20:20:55.307003  721676 command_runner.go:130] > # global_auth_file = ""
	I0906 20:20:55.307023  721676 command_runner.go:130] > # The image used to instantiate infra containers.
	I0906 20:20:55.307037  721676 command_runner.go:130] > # This option supports live configuration reload.
	I0906 20:20:55.307044  721676 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0906 20:20:55.307052  721676 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0906 20:20:55.307064  721676 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0906 20:20:55.307071  721676 command_runner.go:130] > # This option supports live configuration reload.
	I0906 20:20:55.307076  721676 command_runner.go:130] > # pause_image_auth_file = ""
	I0906 20:20:55.307083  721676 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0906 20:20:55.307092  721676 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0906 20:20:55.307100  721676 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0906 20:20:55.307110  721676 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0906 20:20:55.307116  721676 command_runner.go:130] > # pause_command = "/pause"
	I0906 20:20:55.307125  721676 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0906 20:20:55.307136  721676 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0906 20:20:55.307144  721676 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0906 20:20:55.307155  721676 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0906 20:20:55.307161  721676 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0906 20:20:55.307172  721676 command_runner.go:130] > # signature_policy = ""
	I0906 20:20:55.307198  721676 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0906 20:20:55.307208  721676 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0906 20:20:55.307213  721676 command_runner.go:130] > # changing them here.
	I0906 20:20:55.307222  721676 command_runner.go:130] > # insecure_registries = [
	I0906 20:20:55.307252  721676 command_runner.go:130] > # ]
	I0906 20:20:55.307287  721676 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0906 20:20:55.307295  721676 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0906 20:20:55.307300  721676 command_runner.go:130] > # image_volumes = "mkdir"
	I0906 20:20:55.307312  721676 command_runner.go:130] > # Temporary directory to use for storing big files
	I0906 20:20:55.307318  721676 command_runner.go:130] > # big_files_temporary_dir = ""
	I0906 20:20:55.307328  721676 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0906 20:20:55.307333  721676 command_runner.go:130] > # CNI plugins.
	I0906 20:20:55.307338  721676 command_runner.go:130] > [crio.network]
	I0906 20:20:55.307345  721676 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0906 20:20:55.307352  721676 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0906 20:20:55.307357  721676 command_runner.go:130] > # cni_default_network = ""
	I0906 20:20:55.307369  721676 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0906 20:20:55.307378  721676 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0906 20:20:55.307386  721676 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0906 20:20:55.307391  721676 command_runner.go:130] > # plugin_dirs = [
	I0906 20:20:55.307402  721676 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0906 20:20:55.307406  721676 command_runner.go:130] > # ]
	I0906 20:20:55.307415  721676 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0906 20:20:55.307420  721676 command_runner.go:130] > [crio.metrics]
	I0906 20:20:55.307427  721676 command_runner.go:130] > # Globally enable or disable metrics support.
	I0906 20:20:55.307435  721676 command_runner.go:130] > # enable_metrics = false
	I0906 20:20:55.307441  721676 command_runner.go:130] > # Specify enabled metrics collectors.
	I0906 20:20:55.307447  721676 command_runner.go:130] > # Per default all metrics are enabled.
	I0906 20:20:55.307457  721676 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0906 20:20:55.307465  721676 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0906 20:20:55.307475  721676 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0906 20:20:55.307481  721676 command_runner.go:130] > # metrics_collectors = [
	I0906 20:20:55.307672  721676 command_runner.go:130] > # 	"operations",
	I0906 20:20:55.307687  721676 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0906 20:20:55.307694  721676 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0906 20:20:55.307699  721676 command_runner.go:130] > # 	"operations_errors",
	I0906 20:20:55.307704  721676 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0906 20:20:55.307713  721676 command_runner.go:130] > # 	"image_pulls_by_name",
	I0906 20:20:55.307738  721676 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0906 20:20:55.307744  721676 command_runner.go:130] > # 	"image_pulls_failures",
	I0906 20:20:55.307755  721676 command_runner.go:130] > # 	"image_pulls_successes",
	I0906 20:20:55.307761  721676 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0906 20:20:55.307766  721676 command_runner.go:130] > # 	"image_layer_reuse",
	I0906 20:20:55.307771  721676 command_runner.go:130] > # 	"containers_oom_total",
	I0906 20:20:55.307778  721676 command_runner.go:130] > # 	"containers_oom",
	I0906 20:20:55.307783  721676 command_runner.go:130] > # 	"processes_defunct",
	I0906 20:20:55.307791  721676 command_runner.go:130] > # 	"operations_total",
	I0906 20:20:55.307796  721676 command_runner.go:130] > # 	"operations_latency_seconds",
	I0906 20:20:55.307803  721676 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0906 20:20:55.307811  721676 command_runner.go:130] > # 	"operations_errors_total",
	I0906 20:20:55.307817  721676 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0906 20:20:55.307826  721676 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0906 20:20:55.307831  721676 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0906 20:20:55.307837  721676 command_runner.go:130] > # 	"image_pulls_success_total",
	I0906 20:20:55.307843  721676 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0906 20:20:55.307848  721676 command_runner.go:130] > # 	"containers_oom_count_total",
	I0906 20:20:55.307855  721676 command_runner.go:130] > # ]
	I0906 20:20:55.307861  721676 command_runner.go:130] > # The port on which the metrics server will listen.
	I0906 20:20:55.307868  721676 command_runner.go:130] > # metrics_port = 9090
	I0906 20:20:55.307874  721676 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0906 20:20:55.307879  721676 command_runner.go:130] > # metrics_socket = ""
	I0906 20:20:55.307885  721676 command_runner.go:130] > # The certificate for the secure metrics server.
	I0906 20:20:55.307895  721676 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0906 20:20:55.307903  721676 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0906 20:20:55.307912  721676 command_runner.go:130] > # certificate on any modification event.
	I0906 20:20:55.307917  721676 command_runner.go:130] > # metrics_cert = ""
	I0906 20:20:55.307925  721676 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0906 20:20:55.307932  721676 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0906 20:20:55.307936  721676 command_runner.go:130] > # metrics_key = ""
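If enable_metrics were switched on, the exporter configured above would serve Prometheus text on metrics_port. A minimal spot-check from inside the node, assuming the default port 9090 and that curl is available there (not part of this test run):

	# hypothetical check: list CRI-O operation counters from the metrics endpoint
	curl -s http://127.0.0.1:9090/metrics | grep -E '^(crio_|container_runtime_crio_)operations'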
	I0906 20:20:55.307945  721676 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0906 20:20:55.307950  721676 command_runner.go:130] > [crio.tracing]
	I0906 20:20:55.307960  721676 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0906 20:20:55.307965  721676 command_runner.go:130] > # enable_tracing = false
	I0906 20:20:55.307972  721676 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0906 20:20:55.307980  721676 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0906 20:20:55.307986  721676 command_runner.go:130] > # Number of samples to collect per million spans.
	I0906 20:20:55.308167  721676 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0906 20:20:55.308185  721676 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0906 20:20:55.308190  721676 command_runner.go:130] > [crio.stats]
	I0906 20:20:55.308197  721676 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0906 20:20:55.308204  721676 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0906 20:20:55.308209  721676 command_runner.go:130] > # stats_collection_period = 0
	I0906 20:20:55.309954  721676 command_runner.go:130] ! time="2023-09-06 20:20:55.299934880Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0906 20:20:55.309975  721676 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0906 20:20:55.310367  721676 cni.go:84] Creating CNI manager for ""
	I0906 20:20:55.310386  721676 cni.go:136] 2 nodes found, recommending kindnet
	I0906 20:20:55.310396  721676 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 20:20:55.310415  721676 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-782472 NodeName:multinode-782472-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 20:20:55.310545  721676 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-782472-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 20:20:55.310599  721676 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-782472-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-782472 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
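The kubelet drop-in written above can be inspected on the worker node once it is running; a minimal sketch using standard systemd tooling, run inside the node (for example via minikube ssh):

	# show the rendered unit together with the 10-kubeadm.conf drop-in copied above
	sudo systemctl cat kubelet
	# tail recent kubelet logs to confirm it picked up --node-ip=192.168.58.3
	sudo journalctl -u kubelet --no-pager -n 50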
	I0906 20:20:55.310669  721676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0906 20:20:55.320407  721676 command_runner.go:130] > kubeadm
	I0906 20:20:55.320422  721676 command_runner.go:130] > kubectl
	I0906 20:20:55.320428  721676 command_runner.go:130] > kubelet
	I0906 20:20:55.321472  721676 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:20:55.321538  721676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0906 20:20:55.332365  721676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0906 20:20:55.356287  721676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:20:55.380131  721676 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0906 20:20:55.385002  721676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:20:55.399161  721676 host.go:66] Checking if "multinode-782472" exists ...
	I0906 20:20:55.399433  721676 start.go:301] JoinCluster: &{Name:multinode-782472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-782472 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 20:20:55.399524  721676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0906 20:20:55.399574  721676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-782472
	I0906 20:20:55.399964  721676 config.go:182] Loaded profile config "multinode-782472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 20:20:55.418785  721676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33492 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/multinode-782472/id_rsa Username:docker}
	I0906 20:20:55.593275  721676 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 06bp25.6bgsc77pdbw3uynx --discovery-token-ca-cert-hash sha256:925f63182e76e2af8a48585abf1c88b69bde0aecb697a8f6aa9904972710d54a 
	I0906 20:20:55.593329  721676 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0906 20:20:55.593360  721676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 06bp25.6bgsc77pdbw3uynx --discovery-token-ca-cert-hash sha256:925f63182e76e2af8a48585abf1c88b69bde0aecb697a8f6aa9904972710d54a --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-782472-m02"
	I0906 20:20:55.639730  721676 command_runner.go:130] > [preflight] Running pre-flight checks
	I0906 20:20:55.677702  721676 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0906 20:20:55.677726  721676 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1044-aws
	I0906 20:20:55.677733  721676 command_runner.go:130] > OS: Linux
	I0906 20:20:55.677740  721676 command_runner.go:130] > CGROUPS_CPU: enabled
	I0906 20:20:55.677752  721676 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0906 20:20:55.677759  721676 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0906 20:20:55.677771  721676 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0906 20:20:55.677781  721676 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0906 20:20:55.677787  721676 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0906 20:20:55.677799  721676 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0906 20:20:55.677806  721676 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0906 20:20:55.677817  721676 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0906 20:20:55.790439  721676 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0906 20:20:55.791223  721676 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0906 20:20:55.828494  721676 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:20:55.828731  721676 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:20:55.829134  721676 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0906 20:20:55.938271  721676 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0906 20:20:58.960785  721676 command_runner.go:130] > This node has joined the cluster:
	I0906 20:20:58.960806  721676 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0906 20:20:58.960814  721676 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0906 20:20:58.960822  721676 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0906 20:20:58.964141  721676 command_runner.go:130] ! W0906 20:20:55.639174    1024 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0906 20:20:58.964170  721676 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-aws\n", err: exit status 1
	I0906 20:20:58.964183  721676 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:20:58.964206  721676 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 06bp25.6bgsc77pdbw3uynx --discovery-token-ca-cert-hash sha256:925f63182e76e2af8a48585abf1c88b69bde0aecb697a8f6aa9904972710d54a --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-782472-m02": (3.370825159s)
	I0906 20:20:58.964221  721676 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0906 20:20:59.080035  721676 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0906 20:20:59.183957  721676 start.go:303] JoinCluster complete in 3.784518403s
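Once JoinCluster returns, the new worker should be visible from the control plane; a minimal sketch, assuming the kubeconfig context carries the profile name multinode-782472:

	# expect multinode-782472 (control-plane) and multinode-782472-m02 (worker) to be listed
	kubectl --context multinode-782472 get nodes -o wide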
	I0906 20:20:59.183981  721676 cni.go:84] Creating CNI manager for ""
	I0906 20:20:59.183995  721676 cni.go:136] 2 nodes found, recommending kindnet
	I0906 20:20:59.184052  721676 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0906 20:20:59.189360  721676 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0906 20:20:59.189382  721676 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I0906 20:20:59.189391  721676 command_runner.go:130] > Device: 3ah/58d	Inode: 5453116     Links: 1
	I0906 20:20:59.189398  721676 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0906 20:20:59.189430  721676 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I0906 20:20:59.189454  721676 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I0906 20:20:59.189472  721676 command_runner.go:130] > Change: 2023-09-06 19:57:07.056534413 +0000
	I0906 20:20:59.189480  721676 command_runner.go:130] >  Birth: 2023-09-06 19:57:07.016534467 +0000
	I0906 20:20:59.189556  721676 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0906 20:20:59.189564  721676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0906 20:20:59.218446  721676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0906 20:20:59.582757  721676 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0906 20:20:59.594220  721676 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0906 20:20:59.598760  721676 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0906 20:20:59.627194  721676 command_runner.go:130] > daemonset.apps/kindnet configured
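After the kindnet manifest is applied, its DaemonSet should roll a pod out onto the new node; a minimal sketch, assuming the pods carry an app=kindnet label as in the bundled manifest:

	kubectl --context multinode-782472 -n kube-system rollout status daemonset kindnet
	# the label selector is an assumption; adjust it if the manifest uses a different label
	kubectl --context multinode-782472 -n kube-system get pods -l app=kindnet -o wide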
	I0906 20:20:59.630015  721676 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17116-652515/kubeconfig
	I0906 20:20:59.630336  721676 kapi.go:59] client config for multinode-782472: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/client.crt", KeyFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/client.key", CAFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x172c280), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 20:20:59.630664  721676 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0906 20:20:59.630673  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:59.630682  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:59.630689  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:59.633833  721676 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 20:20:59.633862  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:59.633872  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:59.633879  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:59.633886  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:59.633892  721676 round_trippers.go:580]     Content-Length: 291
	I0906 20:20:59.633899  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:59 GMT
	I0906 20:20:59.633905  721676 round_trippers.go:580]     Audit-Id: 306f9acc-3b62-41ba-8cfe-a4eec97ae6d1
	I0906 20:20:59.633912  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:59.634174  721676 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"88a10611-e857-48eb-b81e-bdcb9cbcce00","resourceVersion":"404","creationTimestamp":"2023-09-06T20:20:24Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0906 20:20:59.634306  721676 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-782472" context rescaled to 1 replicas
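The rescale above goes through the deployments/coredns scale subresource; the equivalent manual step would be a plain kubectl scale, assuming the same context name:

	kubectl --context multinode-782472 -n kube-system scale deployment coredns --replicas=1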
	I0906 20:20:59.634357  721676 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0906 20:20:59.636915  721676 out.go:177] * Verifying Kubernetes components...
	I0906 20:20:59.638934  721676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:20:59.668348  721676 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17116-652515/kubeconfig
	I0906 20:20:59.668617  721676 kapi.go:59] client config for multinode-782472: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/client.crt", KeyFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/multinode-782472/client.key", CAFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x172c280), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 20:20:59.668902  721676 node_ready.go:35] waiting up to 6m0s for node "multinode-782472-m02" to be "Ready" ...
	I0906 20:20:59.668975  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:20:59.668985  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:59.668994  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:59.669002  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:59.671842  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:20:59.671871  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:59.671881  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:59.671888  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:59 GMT
	I0906 20:20:59.671895  721676 round_trippers.go:580]     Audit-Id: 4aac94b8-6ff3-4f85-a27a-55bca0564c4a
	I0906 20:20:59.671902  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:59.671915  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:59.671922  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:59.672471  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"442","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0906 20:20:59.672914  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:20:59.672930  721676 round_trippers.go:469] Request Headers:
	I0906 20:20:59.672940  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:20:59.672952  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:20:59.675984  721676 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 20:20:59.676013  721676 round_trippers.go:577] Response Headers:
	I0906 20:20:59.676022  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:20:59.676029  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:20:59.676036  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:20:59.676074  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:20:59.676089  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:20:59 GMT
	I0906 20:20:59.676096  721676 round_trippers.go:580]     Audit-Id: f6d15b88-a021-48fc-a8c9-dad3c91b646c
	I0906 20:20:59.677544  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"442","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0906 20:21:00.192202  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:00.192228  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:00.192239  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:00.192246  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:00.208852  721676 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0906 20:21:00.208887  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:00.208898  721676 round_trippers.go:580]     Audit-Id: 379ec3b8-0509-4a2a-ba36-7f462bec87fc
	I0906 20:21:00.208907  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:00.208914  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:00.208920  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:00.208927  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:00.209043  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:00 GMT
	I0906 20:21:00.212856  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"442","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0906 20:21:00.678969  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:00.678993  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:00.679005  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:00.679012  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:00.681456  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:00.681482  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:00.681491  721676 round_trippers.go:580]     Audit-Id: 34e06851-57d3-4127-bef6-918436d66df5
	I0906 20:21:00.681498  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:00.681522  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:00.681532  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:00.681543  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:00.681551  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:00 GMT
	I0906 20:21:00.682021  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"442","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0906 20:21:01.178249  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:01.178274  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:01.178283  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:01.178291  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:01.180804  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:01.180859  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:01.180868  721676 round_trippers.go:580]     Audit-Id: d46aa9ad-d3eb-4eff-b54d-ce03f3c6cbcf
	I0906 20:21:01.180875  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:01.180882  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:01.180891  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:01.180902  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:01.180916  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:01 GMT
	I0906 20:21:01.181044  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"442","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0906 20:21:01.678537  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:01.678562  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:01.678575  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:01.678583  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:01.681239  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:01.681263  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:01.681272  721676 round_trippers.go:580]     Audit-Id: deb725d3-33ef-472b-903d-a36b1450c6c0
	I0906 20:21:01.681279  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:01.681286  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:01.681292  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:01.681299  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:01.681305  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:01 GMT
	I0906 20:21:01.681798  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"458","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0906 20:21:01.682254  721676 node_ready.go:58] node "multinode-782472-m02" has status "Ready":"False"
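This readiness check keeps GETting the node object until its Ready condition flips to True; roughly the same wait can be expressed with kubectl, assuming the context name matches the profile:

	# blocks until the node reports Ready or the timeout expires
	kubectl --context multinode-782472 wait --for=condition=Ready node/multinode-782472-m02 --timeout=6m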
	I0906 20:21:02.178999  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:02.179024  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:02.179036  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:02.179043  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:02.181729  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:02.181751  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:02.181760  721676 round_trippers.go:580]     Audit-Id: c573f7a1-c0fb-4715-8ff2-05e9cb6e8716
	I0906 20:21:02.181767  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:02.181773  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:02.181780  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:02.181789  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:02.181795  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:02 GMT
	I0906 20:21:02.181958  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"458","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0906 20:21:02.679124  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:02.679149  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:02.679159  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:02.679168  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:02.681904  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:02.681930  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:02.681941  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:02 GMT
	I0906 20:21:02.681948  721676 round_trippers.go:580]     Audit-Id: 1ae40046-6e7c-4ec3-b6b8-d7f1fa9177cf
	I0906 20:21:02.681959  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:02.681967  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:02.681979  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:02.681997  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:02.682157  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"458","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0906 20:21:03.178855  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:03.178878  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:03.178889  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:03.178896  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:03.181694  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:03.181720  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:03.181731  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:03.181743  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:03.181753  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:03 GMT
	I0906 20:21:03.181765  721676 round_trippers.go:580]     Audit-Id: 8a91c0e5-7d72-4868-b65f-7c71fc2d70f2
	I0906 20:21:03.181772  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:03.181783  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:03.181940  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"458","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0906 20:21:03.678649  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:03.678670  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:03.678681  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:03.678689  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:03.681352  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:03.681387  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:03.681397  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:03.681406  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:03 GMT
	I0906 20:21:03.681413  721676 round_trippers.go:580]     Audit-Id: 14544c05-968f-4cc2-abeb-ed7523a8b463
	I0906 20:21:03.681419  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:03.681426  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:03.681433  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:03.681540  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"458","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0906 20:21:04.178723  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:04.178749  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:04.178760  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:04.178768  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:04.181453  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:04.181487  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:04.181501  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:04.181508  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:04.181518  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:04.181535  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:04.181543  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:04 GMT
	I0906 20:21:04.181550  721676 round_trippers.go:580]     Audit-Id: 3aa251f1-7ac1-42b0-8812-92347e9df279
	I0906 20:21:04.181830  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"458","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0906 20:21:04.182266  721676 node_ready.go:58] node "multinode-782472-m02" has status "Ready":"False"
	I0906 20:21:04.678222  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:04.678246  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:04.678256  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:04.678264  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:04.680728  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:04.680749  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:04.680757  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:04.680765  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:04.680772  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:04.680779  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:04 GMT
	I0906 20:21:04.680785  721676 round_trippers.go:580]     Audit-Id: 4a29dc2a-4e86-4d07-b072-ee6e9f5b096e
	I0906 20:21:04.680793  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:04.681076  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"458","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0906 20:21:05.178801  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:05.178831  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:05.178844  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:05.178851  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:05.181490  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:05.181518  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:05.181527  721676 round_trippers.go:580]     Audit-Id: 9a78b6c2-67cf-412c-8620-a1f249d9419f
	I0906 20:21:05.181534  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:05.181541  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:05.181601  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:05.181612  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:05.181622  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:05 GMT
	I0906 20:21:05.181769  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"458","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0906 20:21:05.678960  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:05.679002  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:05.679015  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:05.679023  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:05.681538  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:05.681558  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:05.681567  721676 round_trippers.go:580]     Audit-Id: ec9a836d-20ba-4a59-962c-995534951ab6
	I0906 20:21:05.681574  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:05.681580  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:05.681587  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:05.681594  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:05.681604  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:05 GMT
	I0906 20:21:05.681699  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"458","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0906 20:21:06.178844  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:06.178868  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:06.178879  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:06.178887  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:06.181475  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:06.181502  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:06.181513  721676 round_trippers.go:580]     Audit-Id: 330cc57e-b045-49e3-98b0-d37e1f966df1
	I0906 20:21:06.181520  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:06.181527  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:06.181534  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:06.181548  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:06.181558  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:06 GMT
	I0906 20:21:06.181867  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"458","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0906 20:21:06.678352  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:06.678375  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:06.678384  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:06.678392  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:06.681095  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:06.681117  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:06.681127  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:06.681134  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:06 GMT
	I0906 20:21:06.681141  721676 round_trippers.go:580]     Audit-Id: d754b5e1-be0c-40f2-9125-5e10244c6e16
	I0906 20:21:06.681147  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:06.681154  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:06.681161  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:06.681283  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"458","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0906 20:21:06.681687  721676 node_ready.go:58] node "multinode-782472-m02" has status "Ready":"False"
	I0906 20:21:07.178675  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:07.178700  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:07.178711  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:07.178719  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:07.181250  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:07.181270  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:07.181279  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:07.181286  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:07 GMT
	I0906 20:21:07.181293  721676 round_trippers.go:580]     Audit-Id: 0ab06c2b-a9fe-4e80-93ab-69689581f069
	I0906 20:21:07.181299  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:07.181306  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:07.181313  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:07.181412  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"458","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0906 20:21:07.678772  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:07.678803  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:07.678818  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:07.678826  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:07.681607  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:07.681627  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:07.681637  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:07.681644  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:07.681650  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:07 GMT
	I0906 20:21:07.681657  721676 round_trippers.go:580]     Audit-Id: 2b22cfff-1ddf-4e30-bb42-664be5d127aa
	I0906 20:21:07.681663  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:07.681671  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:07.681771  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"458","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0906 20:21:08.178998  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:08.179045  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:08.179056  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:08.179063  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:08.181622  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:08.181643  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:08.181652  721676 round_trippers.go:580]     Audit-Id: 5777f4ea-3cd4-44ba-910a-a64aae378eb1
	I0906 20:21:08.181660  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:08.181667  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:08.181673  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:08.181679  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:08.181687  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:08 GMT
	I0906 20:21:08.181840  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"458","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0906 20:21:08.679189  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:08.679215  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:08.679225  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:08.679233  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:08.681758  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:08.681789  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:08.681798  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:08.681807  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:08 GMT
	I0906 20:21:08.681813  721676 round_trippers.go:580]     Audit-Id: c2107140-5394-416a-9f82-b7c4866474c0
	I0906 20:21:08.681819  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:08.681826  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:08.681833  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:08.681945  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"458","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0906 20:21:08.682333  721676 node_ready.go:58] node "multinode-782472-m02" has status "Ready":"False"
	I0906 20:21:09.179100  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:09.179124  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:09.179134  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:09.179142  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:09.181708  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:09.181735  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:09.181745  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:09.181753  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:09.181759  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:09 GMT
	I0906 20:21:09.181766  721676 round_trippers.go:580]     Audit-Id: 0b054e78-1303-46c6-a814-10758eb1de33
	I0906 20:21:09.181773  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:09.181782  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:09.181966  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:09.678476  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:09.678501  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:09.678512  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:09.678520  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:09.682543  721676 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 20:21:09.682572  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:09.682582  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:09.682589  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:09 GMT
	I0906 20:21:09.682596  721676 round_trippers.go:580]     Audit-Id: 450c79f7-9b8c-4438-b271-fbc370ff648a
	I0906 20:21:09.682606  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:09.682613  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:09.682619  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:09.682778  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:10.178243  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:10.178268  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:10.178278  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:10.178286  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:10.181321  721676 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 20:21:10.181377  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:10.181386  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:10.181393  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:10 GMT
	I0906 20:21:10.181440  721676 round_trippers.go:580]     Audit-Id: ff6a6586-2197-411c-a228-99312241b370
	I0906 20:21:10.181448  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:10.181455  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:10.181462  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:10.181580  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:10.679145  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:10.679174  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:10.679185  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:10.679195  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:10.682009  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:10.682034  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:10.682066  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:10.682074  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:10.682080  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:10.682087  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:10.682094  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:10 GMT
	I0906 20:21:10.682104  721676 round_trippers.go:580]     Audit-Id: 23e2d6e9-fd00-4989-a6f2-755b8bcb4761
	I0906 20:21:10.682282  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:10.682660  721676 node_ready.go:58] node "multinode-782472-m02" has status "Ready":"False"
	I0906 20:21:11.178968  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:11.178991  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:11.179001  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:11.179009  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:11.181586  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:11.181613  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:11.181628  721676 round_trippers.go:580]     Audit-Id: b8b7f0af-d54a-4d01-8180-ad64cdf97a28
	I0906 20:21:11.181636  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:11.181643  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:11.181650  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:11.181665  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:11.181672  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:11 GMT
	I0906 20:21:11.182030  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:11.678753  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:11.678818  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:11.678836  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:11.678844  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:11.681305  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:11.681329  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:11.681338  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:11.681345  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:11 GMT
	I0906 20:21:11.681351  721676 round_trippers.go:580]     Audit-Id: bfaefc2b-0569-40e4-b76b-255a64b31626
	I0906 20:21:11.681359  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:11.681368  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:11.681377  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:11.681609  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:12.178767  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:12.178790  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:12.178801  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:12.178809  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:12.181345  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:12.181405  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:12.181427  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:12.181450  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:12.181487  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:12 GMT
	I0906 20:21:12.181514  721676 round_trippers.go:580]     Audit-Id: 632840d1-a9cd-4547-99f6-2aa26b50c83b
	I0906 20:21:12.181536  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:12.181557  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:12.181712  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:12.678256  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:12.678279  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:12.678289  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:12.678297  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:12.681255  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:12.681289  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:12.681299  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:12.681306  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:12.681316  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:12.681322  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:12.681340  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:12 GMT
	I0906 20:21:12.681350  721676 round_trippers.go:580]     Audit-Id: d48b6e5d-92fc-4418-8f09-e53c770da166
	I0906 20:21:12.681477  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:13.179042  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:13.179065  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:13.179083  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:13.179095  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:13.181762  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:13.181800  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:13.181809  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:13.181816  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:13.181823  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:13 GMT
	I0906 20:21:13.181829  721676 round_trippers.go:580]     Audit-Id: 17cc6905-2e70-40cd-9d8b-c467543df5c2
	I0906 20:21:13.181836  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:13.181889  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:13.182195  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:13.182686  721676 node_ready.go:58] node "multinode-782472-m02" has status "Ready":"False"
	I0906 20:21:13.678190  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:13.678218  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:13.678228  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:13.678235  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:13.680770  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:13.680795  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:13.680804  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:13.680812  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:13.680818  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:13.680825  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:13.680832  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:13 GMT
	I0906 20:21:13.680839  721676 round_trippers.go:580]     Audit-Id: cf1f44ab-81e6-4622-8193-84a4d2290925
	I0906 20:21:13.680966  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:14.179072  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:14.179093  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:14.179103  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:14.179110  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:14.181656  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:14.181675  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:14.181684  721676 round_trippers.go:580]     Audit-Id: 721a9d30-9c72-4082-977a-dadb49b24cf6
	I0906 20:21:14.181691  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:14.181698  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:14.181704  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:14.181711  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:14.181718  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:14 GMT
	I0906 20:21:14.181825  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:14.679003  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:14.679025  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:14.679035  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:14.679042  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:14.681454  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:14.681481  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:14.681491  721676 round_trippers.go:580]     Audit-Id: aebf0bdc-08cb-40a7-b52d-47315d5459a3
	I0906 20:21:14.681498  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:14.681504  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:14.681512  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:14.681521  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:14.681528  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:14 GMT
	I0906 20:21:14.681650  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:15.178838  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:15.178864  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:15.178876  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:15.178883  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:15.181761  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:15.181786  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:15.181796  721676 round_trippers.go:580]     Audit-Id: 4792f033-d3ba-46ee-a673-aeeed5c52f08
	I0906 20:21:15.181805  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:15.181812  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:15.181819  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:15.181827  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:15.181835  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:15 GMT
	I0906 20:21:15.181974  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:15.678187  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:15.678213  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:15.678224  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:15.678232  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:15.680841  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:15.680869  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:15.680879  721676 round_trippers.go:580]     Audit-Id: c45b41d6-f6f3-47bc-a1ab-79cd3da59654
	I0906 20:21:15.680887  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:15.680893  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:15.680900  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:15.680907  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:15.680916  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:15 GMT
	I0906 20:21:15.681032  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:15.681407  721676 node_ready.go:58] node "multinode-782472-m02" has status "Ready":"False"
	I0906 20:21:16.178738  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:16.178760  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:16.178770  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:16.178778  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:16.181159  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:16.181185  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:16.181194  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:16 GMT
	I0906 20:21:16.181201  721676 round_trippers.go:580]     Audit-Id: 8a9e55a2-e853-4046-b224-cf4b968500fa
	I0906 20:21:16.181207  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:16.181214  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:16.181220  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:16.181231  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:16.181348  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:16.678230  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:16.678253  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:16.678263  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:16.678272  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:16.680787  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:16.680810  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:16.680819  721676 round_trippers.go:580]     Audit-Id: 0bd8de1c-2f6a-435e-af42-7d554b688f4d
	I0906 20:21:16.680826  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:16.680833  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:16.680840  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:16.680847  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:16.680856  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:16 GMT
	I0906 20:21:16.681273  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:17.178217  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:17.178244  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:17.178255  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:17.178263  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:17.180810  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:17.180832  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:17.180841  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:17.180848  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:17.180855  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:17.180862  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:17 GMT
	I0906 20:21:17.180869  721676 round_trippers.go:580]     Audit-Id: 4fc76ac6-1cb6-4d61-8fef-56ee77788e5f
	I0906 20:21:17.180875  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:17.181066  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:17.678902  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:17.678925  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:17.678935  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:17.678949  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:17.681627  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:17.681650  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:17.681659  721676 round_trippers.go:580]     Audit-Id: f14cdc56-08ec-4cbe-8213-5990c5d35789
	I0906 20:21:17.681666  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:17.681672  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:17.681679  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:17.681685  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:17.681692  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:17 GMT
	I0906 20:21:17.681878  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:17.682319  721676 node_ready.go:58] node "multinode-782472-m02" has status "Ready":"False"
	I0906 20:21:18.178460  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:18.178487  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:18.178498  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:18.178505  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:18.181281  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:18.181305  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:18.181314  721676 round_trippers.go:580]     Audit-Id: 0df0ace3-415c-408a-acbc-b03137eee3da
	I0906 20:21:18.181321  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:18.181328  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:18.181334  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:18.181341  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:18.181347  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:18 GMT
	I0906 20:21:18.181502  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:18.678250  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:18.678275  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:18.678286  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:18.678293  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:18.680827  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:18.680853  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:18.680862  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:18.680869  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:18.680876  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:18 GMT
	I0906 20:21:18.680882  721676 round_trippers.go:580]     Audit-Id: aa3c11eb-6c0f-4027-807e-0cc28f3b768b
	I0906 20:21:18.680889  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:18.680898  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:18.681031  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:19.178131  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:19.178152  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:19.178162  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:19.178169  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:19.180612  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:19.180633  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:19.180641  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:19.180649  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:19 GMT
	I0906 20:21:19.180656  721676 round_trippers.go:580]     Audit-Id: c0918f77-90aa-4122-b869-004fb6a71d53
	I0906 20:21:19.180663  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:19.180670  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:19.180676  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:19.180800  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:19.678777  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:19.678800  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:19.678810  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:19.678818  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:19.681368  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:19.681393  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:19.681401  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:19.681408  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:19.681415  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:19.681422  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:19 GMT
	I0906 20:21:19.681428  721676 round_trippers.go:580]     Audit-Id: ed08b611-826e-463b-819a-c4d15df368e1
	I0906 20:21:19.681435  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:19.681518  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:20.178276  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:20.178304  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:20.178314  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:20.178322  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:20.181106  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:20.181133  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:20.181145  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:20.181154  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:20.181162  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:20 GMT
	I0906 20:21:20.181170  721676 round_trippers.go:580]     Audit-Id: 2242bfa2-fbac-4f2b-9995-db31ef28832d
	I0906 20:21:20.181181  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:20.181190  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:20.181368  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:20.181939  721676 node_ready.go:58] node "multinode-782472-m02" has status "Ready":"False"
	I0906 20:21:20.678737  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:20.678768  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:20.678782  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:20.678798  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:20.681419  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:20.681444  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:20.681453  721676 round_trippers.go:580]     Audit-Id: c76b0b58-7633-42f6-81e1-c434e7caa35e
	I0906 20:21:20.681460  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:20.681466  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:20.681473  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:20.681479  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:20.681488  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:20 GMT
	I0906 20:21:20.681608  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:21.178810  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:21.178835  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:21.178845  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:21.178852  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:21.181234  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:21.181257  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:21.181267  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:21.181274  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:21.181281  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:21 GMT
	I0906 20:21:21.181287  721676 round_trippers.go:580]     Audit-Id: d0bf669c-daaf-462c-a11f-175f8a590b0a
	I0906 20:21:21.181294  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:21.181305  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:21.181642  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:21.678750  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:21.678774  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:21.678784  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:21.678793  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:21.681300  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:21.681322  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:21.681331  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:21.681338  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:21.681345  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:21 GMT
	I0906 20:21:21.681354  721676 round_trippers.go:580]     Audit-Id: 5c562612-cd6b-4185-8aa1-1587a6e95c69
	I0906 20:21:21.681361  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:21.681367  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:21.681487  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:22.178249  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:22.178274  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:22.178284  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:22.178292  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:22.180825  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:22.180852  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:22.180861  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:22.180868  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:22.180875  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:22.180881  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:22.180888  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:22 GMT
	I0906 20:21:22.180895  721676 round_trippers.go:580]     Audit-Id: 7a14b3c4-33e1-4ad1-908a-eba506e07f68
	I0906 20:21:22.181024  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:22.679057  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:22.679082  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:22.679093  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:22.679101  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:22.681435  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:22.681462  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:22.681471  721676 round_trippers.go:580]     Audit-Id: 63da9607-c45e-4c2c-bca8-a7afb04acdcd
	I0906 20:21:22.681478  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:22.681484  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:22.681491  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:22.681503  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:22.681514  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:22 GMT
	I0906 20:21:22.681597  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:22.681962  721676 node_ready.go:58] node "multinode-782472-m02" has status "Ready":"False"
	I0906 20:21:23.178763  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:23.178785  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:23.178795  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:23.178803  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:23.181554  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:23.181611  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:23.181621  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:23.181629  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:23 GMT
	I0906 20:21:23.181635  721676 round_trippers.go:580]     Audit-Id: 08c32fec-a171-41c2-a364-e9298ec4ee26
	I0906 20:21:23.181645  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:23.181652  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:23.181658  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:23.181804  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:23.678423  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:23.678448  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:23.678459  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:23.678466  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:23.681312  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:23.681347  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:23.681358  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:23 GMT
	I0906 20:21:23.681365  721676 round_trippers.go:580]     Audit-Id: 2c43631f-9ba2-477e-b13c-603b0a06cef1
	I0906 20:21:23.681372  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:23.681382  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:23.681389  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:23.681400  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:23.681672  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:24.178292  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:24.178316  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:24.178326  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:24.178334  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:24.180940  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:24.180966  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:24.180975  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:24.180982  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:24.180990  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:24 GMT
	I0906 20:21:24.180997  721676 round_trippers.go:580]     Audit-Id: 0ae0bc29-b1b0-4359-a502-d8dc115d4235
	I0906 20:21:24.181004  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:24.181015  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:24.181347  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:24.678247  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:24.678272  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:24.678283  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:24.678291  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:24.681241  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:24.681262  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:24.681271  721676 round_trippers.go:580]     Audit-Id: f09a411d-b9e2-4737-9278-e66bdd056a6f
	I0906 20:21:24.681278  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:24.681285  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:24.681291  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:24.681298  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:24.681305  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:24 GMT
	I0906 20:21:24.681463  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:25.178147  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:25.178172  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:25.178189  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:25.178197  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:25.181106  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:25.181130  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:25.181139  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:25.181146  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:25 GMT
	I0906 20:21:25.181153  721676 round_trippers.go:580]     Audit-Id: 301f07f1-c1b8-40c6-80f8-44f67915c8fb
	I0906 20:21:25.181160  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:25.181167  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:25.181177  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:25.181293  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:25.181691  721676 node_ready.go:58] node "multinode-782472-m02" has status "Ready":"False"
	I0906 20:21:25.678432  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:25.678458  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:25.678469  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:25.678476  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:25.680998  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:25.681023  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:25.681033  721676 round_trippers.go:580]     Audit-Id: 430899b9-27a4-4f41-b153-169eafe80fd3
	I0906 20:21:25.681039  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:25.681046  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:25.681053  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:25.681060  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:25.681074  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:25 GMT
	I0906 20:21:25.681160  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:26.178235  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:26.178261  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:26.178272  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:26.178279  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:26.180821  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:26.180848  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:26.180858  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:26.180865  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:26.180872  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:26 GMT
	I0906 20:21:26.180879  721676 round_trippers.go:580]     Audit-Id: ccb713a7-5f3f-4781-84b8-7bc25ec87916
	I0906 20:21:26.180889  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:26.180896  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:26.181034  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:26.678125  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:26.678149  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:26.678160  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:26.678168  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:26.680681  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:26.680702  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:26.680711  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:26.680718  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:26.680725  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:26.680731  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:26.680738  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:26 GMT
	I0906 20:21:26.680744  721676 round_trippers.go:580]     Audit-Id: 3b35dcc6-0e0e-47cb-9a7b-d4e313ab7b04
	I0906 20:21:26.680859  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:27.179055  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:27.179082  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:27.179093  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:27.179101  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:27.181675  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:27.181704  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:27.181714  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:27 GMT
	I0906 20:21:27.181721  721676 round_trippers.go:580]     Audit-Id: 728fe14d-5f2e-4874-8bec-f567378fb331
	I0906 20:21:27.181728  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:27.181735  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:27.181743  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:27.181750  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:27.181923  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:27.182324  721676 node_ready.go:58] node "multinode-782472-m02" has status "Ready":"False"
	I0906 20:21:27.678185  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:27.678209  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:27.678219  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:27.678227  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:27.680789  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:27.680809  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:27.680818  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:27.680826  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:27.680833  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:27 GMT
	I0906 20:21:27.680839  721676 round_trippers.go:580]     Audit-Id: ca51803d-f364-46d2-9397-ea210d601a3e
	I0906 20:21:27.680846  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:27.680853  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:27.680943  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:28.179145  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:28.179166  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:28.179176  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:28.179184  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:28.181741  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:28.181761  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:28.181770  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:28.181778  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:28 GMT
	I0906 20:21:28.181784  721676 round_trippers.go:580]     Audit-Id: 65b02848-c336-4759-898d-ce4e7c927d3a
	I0906 20:21:28.181791  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:28.181797  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:28.181804  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:28.181930  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:28.679024  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:28.679050  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:28.679061  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:28.679068  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:28.681562  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:28.681590  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:28.681599  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:28.681606  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:28.681612  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:28 GMT
	I0906 20:21:28.681619  721676 round_trippers.go:580]     Audit-Id: 816c6ccc-6570-46e3-87bb-9ce0e36a8fb3
	I0906 20:21:28.681625  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:28.681632  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:28.681745  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:29.179104  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:29.179125  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:29.179135  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:29.179142  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:29.181564  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:29.181588  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:29.181597  721676 round_trippers.go:580]     Audit-Id: 2c503944-6714-433d-9341-65b7e40e9b0a
	I0906 20:21:29.181604  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:29.181611  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:29.181618  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:29.181624  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:29.181636  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:29 GMT
	I0906 20:21:29.181736  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:29.678958  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:29.678989  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:29.679000  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:29.679007  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:29.681441  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:29.681464  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:29.681473  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:29.681480  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:29.681487  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:29 GMT
	I0906 20:21:29.681495  721676 round_trippers.go:580]     Audit-Id: f73cd12f-42eb-4d6d-9e22-9502775389e3
	I0906 20:21:29.681502  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:29.681509  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:29.681607  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:29.681996  721676 node_ready.go:58] node "multinode-782472-m02" has status "Ready":"False"
	I0906 20:21:30.178800  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:30.178822  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:30.178832  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:30.178839  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:30.183794  721676 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 20:21:30.183826  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:30.183836  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:30.183844  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:30.183851  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:30.183860  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:30 GMT
	I0906 20:21:30.183866  721676 round_trippers.go:580]     Audit-Id: 0f1f244d-070f-4a35-88ba-ed21ea45f7a5
	I0906 20:21:30.183873  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:30.184020  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"466","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0906 20:21:30.679196  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:30.679235  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:30.679245  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:30.679262  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:30.681815  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:30.681844  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:30.681854  721676 round_trippers.go:580]     Audit-Id: 0d2a9ae9-9be4-43b8-865d-b4c8808929a2
	I0906 20:21:30.681863  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:30.681869  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:30.681876  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:30.681882  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:30.681889  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:30 GMT
	I0906 20:21:30.682010  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"489","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5378 chars]
	I0906 20:21:30.682405  721676 node_ready.go:49] node "multinode-782472-m02" has status "Ready":"True"
	I0906 20:21:30.682424  721676 node_ready.go:38] duration metric: took 31.013504609s waiting for node "multinode-782472-m02" to be "Ready" ...
	I0906 20:21:30.682433  721676 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:21:30.682501  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0906 20:21:30.682513  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:30.682522  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:30.682529  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:30.686191  721676 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 20:21:30.686221  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:30.686231  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:30.686239  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:30.686245  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:30.686253  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:30 GMT
	I0906 20:21:30.686259  721676 round_trippers.go:580]     Audit-Id: b26df227-2a58-4288-98eb-cc3a10ec544a
	I0906 20:21:30.686267  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:30.686995  721676 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"489"},"items":[{"metadata":{"name":"coredns-5dd5756b68-79759","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b492a232-9d20-4012-8a94-0ff7eca50db6","resourceVersion":"399","creationTimestamp":"2023-09-06T20:20:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5b61ad83-6adc-400b-813b-0fdf43f24858","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b61ad83-6adc-400b-813b-0fdf43f24858\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68974 chars]
	I0906 20:21:30.689893  721676 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-79759" in "kube-system" namespace to be "Ready" ...
	I0906 20:21:30.689988  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-79759
	I0906 20:21:30.689998  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:30.690008  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:30.690018  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:30.692623  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:30.692642  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:30.692651  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:30.692657  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:30.692664  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:30.692671  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:30 GMT
	I0906 20:21:30.692677  721676 round_trippers.go:580]     Audit-Id: 02963a35-2bea-4953-b657-a341bfa330d7
	I0906 20:21:30.692683  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:30.692826  721676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-79759","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b492a232-9d20-4012-8a94-0ff7eca50db6","resourceVersion":"399","creationTimestamp":"2023-09-06T20:20:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5b61ad83-6adc-400b-813b-0fdf43f24858","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b61ad83-6adc-400b-813b-0fdf43f24858\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0906 20:21:30.693327  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:21:30.693335  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:30.693342  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:30.693349  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:30.695762  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:30.695824  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:30.695848  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:30.695886  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:30.695905  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:30.695941  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:30.695949  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:30 GMT
	I0906 20:21:30.695956  721676 round_trippers.go:580]     Audit-Id: 1caaf2a9-e209-436d-a4a8-7f5cda3fba14
	I0906 20:21:30.696097  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","resourceVersion":"377","creationTimestamp":"2023-09-06T20:20:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138","minikube.k8s.io/name":"multinode-782472","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_06T20_20_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-06T20:20:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0906 20:21:30.696499  721676 pod_ready.go:92] pod "coredns-5dd5756b68-79759" in "kube-system" namespace has status "Ready":"True"
	I0906 20:21:30.696516  721676 pod_ready.go:81] duration metric: took 6.59314ms waiting for pod "coredns-5dd5756b68-79759" in "kube-system" namespace to be "Ready" ...
	I0906 20:21:30.696527  721676 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-782472" in "kube-system" namespace to be "Ready" ...
	I0906 20:21:30.696586  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-782472
	I0906 20:21:30.696597  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:30.696606  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:30.696613  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:30.699019  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:30.699039  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:30.699048  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:30.699055  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:30.699061  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:30.699068  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:30.699074  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:30 GMT
	I0906 20:21:30.699081  721676 round_trippers.go:580]     Audit-Id: 7a674361-2e3c-4cfb-adec-a837f73114fa
	I0906 20:21:30.699189  721676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-782472","namespace":"kube-system","uid":"c7fbee74-f36a-435f-b4eb-9e01833854a3","resourceVersion":"290","creationTimestamp":"2023-09-06T20:20:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"dfaa62571a1327eee1c536a3243dc8f3","kubernetes.io/config.mirror":"dfaa62571a1327eee1c536a3243dc8f3","kubernetes.io/config.seen":"2023-09-06T20:20:24.185455730Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0906 20:21:30.699634  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:21:30.699642  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:30.699651  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:30.699659  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:30.701969  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:30.701988  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:30.701997  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:30.702003  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:30.702010  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:30 GMT
	I0906 20:21:30.702017  721676 round_trippers.go:580]     Audit-Id: 18410b80-3c0d-44ac-a0d5-286cf9310e61
	I0906 20:21:30.702023  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:30.702030  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:30.702172  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","resourceVersion":"377","creationTimestamp":"2023-09-06T20:20:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138","minikube.k8s.io/name":"multinode-782472","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_06T20_20_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-06T20:20:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0906 20:21:30.702560  721676 pod_ready.go:92] pod "etcd-multinode-782472" in "kube-system" namespace has status "Ready":"True"
	I0906 20:21:30.702571  721676 pod_ready.go:81] duration metric: took 6.037489ms waiting for pod "etcd-multinode-782472" in "kube-system" namespace to be "Ready" ...
	I0906 20:21:30.702588  721676 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-782472" in "kube-system" namespace to be "Ready" ...
	I0906 20:21:30.702646  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-782472
	I0906 20:21:30.702651  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:30.702658  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:30.702665  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:30.705006  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:30.705095  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:30.705123  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:30 GMT
	I0906 20:21:30.705144  721676 round_trippers.go:580]     Audit-Id: e5f25cd6-cdef-4a73-921a-4b9a80375e0b
	I0906 20:21:30.705175  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:30.705183  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:30.705190  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:30.705197  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:30.705325  721676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-782472","namespace":"kube-system","uid":"8d109f5d-3d07-4d57-bb86-5144199cf5e8","resourceVersion":"260","creationTimestamp":"2023-09-06T20:20:24Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"d8cb9b6609c14b0204af7167dd8050e9","kubernetes.io/config.mirror":"d8cb9b6609c14b0204af7167dd8050e9","kubernetes.io/config.seen":"2023-09-06T20:20:24.185460022Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0906 20:21:30.705900  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:21:30.705920  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:30.705929  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:30.705942  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:30.708290  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:30.708314  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:30.708323  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:30 GMT
	I0906 20:21:30.708330  721676 round_trippers.go:580]     Audit-Id: 76779e1b-5c6b-44db-a433-817f07f6e374
	I0906 20:21:30.708337  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:30.708344  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:30.708354  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:30.708369  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:30.708473  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","resourceVersion":"377","creationTimestamp":"2023-09-06T20:20:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138","minikube.k8s.io/name":"multinode-782472","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_06T20_20_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-06T20:20:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0906 20:21:30.708863  721676 pod_ready.go:92] pod "kube-apiserver-multinode-782472" in "kube-system" namespace has status "Ready":"True"
	I0906 20:21:30.708879  721676 pod_ready.go:81] duration metric: took 6.283807ms waiting for pod "kube-apiserver-multinode-782472" in "kube-system" namespace to be "Ready" ...
	I0906 20:21:30.708890  721676 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-782472" in "kube-system" namespace to be "Ready" ...
	I0906 20:21:30.708962  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-782472
	I0906 20:21:30.708973  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:30.708981  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:30.708988  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:30.711416  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:30.711441  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:30.711450  721676 round_trippers.go:580]     Audit-Id: 6cc40444-49ee-40a0-ba4e-16e29735182a
	I0906 20:21:30.711457  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:30.711463  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:30.711470  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:30.711477  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:30.711496  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:30 GMT
	I0906 20:21:30.711646  721676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-782472","namespace":"kube-system","uid":"67462036-1f86-4cd8-8872-e0f7c61eec13","resourceVersion":"263","creationTimestamp":"2023-09-06T20:20:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fae94c75f3d99d8053cd41b1188a79cb","kubernetes.io/config.mirror":"fae94c75f3d99d8053cd41b1188a79cb","kubernetes.io/config.seen":"2023-09-06T20:20:24.185461507Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0906 20:21:30.712146  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:21:30.712161  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:30.712169  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:30.712177  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:30.714439  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:30.714463  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:30.714473  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:30.714480  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:30.714486  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:30.714496  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:30 GMT
	I0906 20:21:30.714503  721676 round_trippers.go:580]     Audit-Id: 5b0a83a0-0e8b-4341-b24e-c492d50d5c23
	I0906 20:21:30.714511  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:30.714612  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","resourceVersion":"377","creationTimestamp":"2023-09-06T20:20:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138","minikube.k8s.io/name":"multinode-782472","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_06T20_20_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-06T20:20:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0906 20:21:30.714987  721676 pod_ready.go:92] pod "kube-controller-manager-multinode-782472" in "kube-system" namespace has status "Ready":"True"
	I0906 20:21:30.715002  721676 pod_ready.go:81] duration metric: took 6.104492ms waiting for pod "kube-controller-manager-multinode-782472" in "kube-system" namespace to be "Ready" ...
	I0906 20:21:30.715012  721676 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lhjnq" in "kube-system" namespace to be "Ready" ...
	I0906 20:21:30.879320  721676 request.go:629] Waited for 164.239656ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lhjnq
	I0906 20:21:30.879426  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lhjnq
	I0906 20:21:30.879436  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:30.879447  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:30.879454  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:30.881959  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:30.881986  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:30.881995  721676 round_trippers.go:580]     Audit-Id: e3027311-eb94-4e82-ab6c-ee54b74b996e
	I0906 20:21:30.882002  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:30.882011  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:30.882018  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:30.882024  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:30.882032  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:30 GMT
	I0906 20:21:30.882173  721676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lhjnq","generateName":"kube-proxy-","namespace":"kube-system","uid":"2eb21731-931d-41b6-a6d8-da9bb0d0d3ff","resourceVersion":"385","creationTimestamp":"2023-09-06T20:20:36Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1ecde341-d8a6-4231-a369-8815db31017a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ecde341-d8a6-4231-a369-8815db31017a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0906 20:21:31.080058  721676 request.go:629] Waited for 197.333737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:21:31.080126  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:21:31.080137  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:31.080147  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:31.080158  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:31.083094  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:31.083127  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:31.083141  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:31 GMT
	I0906 20:21:31.083158  721676 round_trippers.go:580]     Audit-Id: b2bb9f65-ee88-4518-b97c-41a46da084eb
	I0906 20:21:31.083165  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:31.083172  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:31.083183  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:31.083190  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:31.083326  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","resourceVersion":"377","creationTimestamp":"2023-09-06T20:20:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138","minikube.k8s.io/name":"multinode-782472","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_06T20_20_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-06T20:20:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0906 20:21:31.083755  721676 pod_ready.go:92] pod "kube-proxy-lhjnq" in "kube-system" namespace has status "Ready":"True"
	I0906 20:21:31.083774  721676 pod_ready.go:81] duration metric: took 368.752895ms waiting for pod "kube-proxy-lhjnq" in "kube-system" namespace to be "Ready" ...
	I0906 20:21:31.083788  721676 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z87gw" in "kube-system" namespace to be "Ready" ...
	I0906 20:21:31.280184  721676 request.go:629] Waited for 196.331925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z87gw
	I0906 20:21:31.280247  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z87gw
	I0906 20:21:31.280256  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:31.280280  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:31.280294  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:31.282776  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:31.282839  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:31.282862  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:31.282886  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:31 GMT
	I0906 20:21:31.282923  721676 round_trippers.go:580]     Audit-Id: 1b975bed-4c84-431c-a966-08da34072909
	I0906 20:21:31.282949  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:31.282963  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:31.282971  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:31.283103  721676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-z87gw","generateName":"kube-proxy-","namespace":"kube-system","uid":"c65306f8-0977-4e56-93bd-03fc71f4f8ae","resourceVersion":"453","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1ecde341-d8a6-4231-a369-8815db31017a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ecde341-d8a6-4231-a369-8815db31017a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0906 20:21:31.479916  721676 request.go:629] Waited for 196.334764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:31.479972  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472-m02
	I0906 20:21:31.479978  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:31.479994  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:31.480005  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:31.482552  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:31.482620  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:31.482663  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:31.482688  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:31.482703  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:31.482710  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:31 GMT
	I0906 20:21:31.482717  721676 round_trippers.go:580]     Audit-Id: aeb74c6c-7a87-4b19-9c80-fd7226f0b1d6
	I0906 20:21:31.482736  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:31.482870  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472-m02","uid":"b1ba6b1b-3a9d-4f34-9813-0bcc62902332","resourceVersion":"490","creationTimestamp":"2023-09-06T20:20:58Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5258 chars]
	I0906 20:21:31.483247  721676 pod_ready.go:92] pod "kube-proxy-z87gw" in "kube-system" namespace has status "Ready":"True"
	I0906 20:21:31.483262  721676 pod_ready.go:81] duration metric: took 399.467928ms waiting for pod "kube-proxy-z87gw" in "kube-system" namespace to be "Ready" ...
	I0906 20:21:31.483272  721676 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-782472" in "kube-system" namespace to be "Ready" ...
	I0906 20:21:31.679631  721676 request.go:629] Waited for 196.292049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-782472
	I0906 20:21:31.679738  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-782472
	I0906 20:21:31.679749  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:31.679759  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:31.679781  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:31.682971  721676 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 20:21:31.683031  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:31.683054  721676 round_trippers.go:580]     Audit-Id: 4ce874dc-354b-423b-b3d4-161037f626cc
	I0906 20:21:31.683077  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:31.683099  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:31.683120  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:31.683159  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:31.683174  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:31 GMT
	I0906 20:21:31.683305  721676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-782472","namespace":"kube-system","uid":"8841f830-f4c4-4cac-8265-3da8e1d4c90c","resourceVersion":"281","creationTimestamp":"2023-09-06T20:20:24Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0a0873e0992ce9209a5f971960d459b8","kubernetes.io/config.mirror":"0a0873e0992ce9209a5f971960d459b8","kubernetes.io/config.seen":"2023-09-06T20:20:24.185462417Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-06T20:20:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0906 20:21:31.879967  721676 request.go:629] Waited for 196.182749ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:21:31.880050  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-782472
	I0906 20:21:31.880063  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:31.880073  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:31.880084  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:31.882765  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:31.882786  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:31.882795  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:31.882802  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:31.882809  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:31.882815  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:31.882824  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:31 GMT
	I0906 20:21:31.882831  721676 round_trippers.go:580]     Audit-Id: 7c89485b-7258-4e01-85ef-4888d7f7df51
	I0906 20:21:31.882942  721676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","resourceVersion":"377","creationTimestamp":"2023-09-06T20:20:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138","minikube.k8s.io/name":"multinode-782472","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_06T20_20_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-06T20:20:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0906 20:21:31.883331  721676 pod_ready.go:92] pod "kube-scheduler-multinode-782472" in "kube-system" namespace has status "Ready":"True"
	I0906 20:21:31.883353  721676 pod_ready.go:81] duration metric: took 400.065515ms waiting for pod "kube-scheduler-multinode-782472" in "kube-system" namespace to be "Ready" ...
	I0906 20:21:31.883371  721676 pod_ready.go:38] duration metric: took 1.200922336s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:21:31.883391  721676 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 20:21:31.883447  721676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:21:31.897605  721676 system_svc.go:56] duration metric: took 14.203959ms WaitForService to wait for kubelet.
	I0906 20:21:31.897634  721676 kubeadm.go:581] duration metric: took 32.26323887s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 20:21:31.897658  721676 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:21:32.079976  721676 request.go:629] Waited for 182.223877ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0906 20:21:32.080035  721676 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0906 20:21:32.080041  721676 round_trippers.go:469] Request Headers:
	I0906 20:21:32.080050  721676 round_trippers.go:473]     Accept: application/json, */*
	I0906 20:21:32.080064  721676 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0906 20:21:32.082803  721676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 20:21:32.082827  721676 round_trippers.go:577] Response Headers:
	I0906 20:21:32.082839  721676 round_trippers.go:580]     Audit-Id: 81fa4fa9-7ce7-4265-b3e5-8a2a9501f9c6
	I0906 20:21:32.082853  721676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 20:21:32.082870  721676 round_trippers.go:580]     Content-Type: application/json
	I0906 20:21:32.082877  721676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: af199d84-d04f-4ba9-b66c-35c570999142
	I0906 20:21:32.082891  721676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a955-0d15-42b4-89d3-c6421e0c4075
	I0906 20:21:32.082903  721676 round_trippers.go:580]     Date: Wed, 06 Sep 2023 20:21:32 GMT
	I0906 20:21:32.087814  721676 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"491"},"items":[{"metadata":{"name":"multinode-782472","uid":"d4d9310d-3b79-4394-89ea-7c8cc779c9a8","resourceVersion":"377","creationTimestamp":"2023-09-06T20:20:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-782472","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138","minikube.k8s.io/name":"multinode-782472","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_06T20_20_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12332 chars]
	I0906 20:21:32.088577  721676 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0906 20:21:32.088600  721676 node_conditions.go:123] node cpu capacity is 2
	I0906 20:21:32.088612  721676 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0906 20:21:32.088617  721676 node_conditions.go:123] node cpu capacity is 2
	I0906 20:21:32.088629  721676 node_conditions.go:105] duration metric: took 190.965698ms to run NodePressure ...
	I0906 20:21:32.088643  721676 start.go:228] waiting for startup goroutines ...
	I0906 20:21:32.088673  721676 start.go:242] writing updated cluster config ...
	I0906 20:21:32.089038  721676 ssh_runner.go:195] Run: rm -f paused
	I0906 20:21:32.155034  721676 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0906 20:21:32.158270  721676 out.go:177] * Done! kubectl is now configured to use "multinode-782472" cluster and "default" namespace by default
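	For reference, the node_conditions figures logged above (ephemeral-storage 203034800Ki and cpu 2, once per node) come from the plain GET /api/v1/nodes request shown in the round_trippers lines. Below is a minimal client-go sketch that reads the same fields; it is an illustrative standalone program, not minikube's node_conditions.go, and it assumes a kubeconfig at the default location.

	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load ~/.kube/config (the same context kubectl was just configured to use).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// GET /api/v1/nodes, the same request logged by round_trippers above.
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		}
	}

	Against this two-node cluster it would print one line per node with the 203034800Ki and 2-cpu values seen in the log.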
	
	* 
	* ==> CRI-O <==
	* Sep 06 20:20:39 multinode-782472 crio[900]: time="2023-09-06 20:20:39.699299198Z" level=info msg="Starting container: d5c86fd34d09363453d96121dd937c85fab5c93306f7ad2c8d683c8da94d8f4b" id=615832fc-ba8a-45d8-bf7f-e877b09af82a name=/runtime.v1.RuntimeService/StartContainer
	Sep 06 20:20:39 multinode-782472 crio[900]: time="2023-09-06 20:20:39.707550805Z" level=info msg="Created container 7910937918ca674f0e8fbf9b5949f71dfdc41ae838f9ee65d633776aef230f59: kube-system/coredns-5dd5756b68-79759/coredns" id=76d01d94-8a2d-4d6f-a531-a92ca73a4a1a name=/runtime.v1.RuntimeService/CreateContainer
	Sep 06 20:20:39 multinode-782472 crio[900]: time="2023-09-06 20:20:39.708396670Z" level=info msg="Starting container: 7910937918ca674f0e8fbf9b5949f71dfdc41ae838f9ee65d633776aef230f59" id=43dd103e-a100-4fcb-8325-ff4ee2cc7166 name=/runtime.v1.RuntimeService/StartContainer
	Sep 06 20:20:39 multinode-782472 crio[900]: time="2023-09-06 20:20:39.718634596Z" level=info msg="Started container" PID=1951 containerID=d5c86fd34d09363453d96121dd937c85fab5c93306f7ad2c8d683c8da94d8f4b description=kube-system/storage-provisioner/storage-provisioner id=615832fc-ba8a-45d8-bf7f-e877b09af82a name=/runtime.v1.RuntimeService/StartContainer sandboxID=214cb7f32a06f5ce49f16d7035ff343e2aa9da4987521cc03cb12aaeac8133bb
	Sep 06 20:20:39 multinode-782472 crio[900]: time="2023-09-06 20:20:39.729807921Z" level=info msg="Started container" PID=1958 containerID=7910937918ca674f0e8fbf9b5949f71dfdc41ae838f9ee65d633776aef230f59 description=kube-system/coredns-5dd5756b68-79759/coredns id=43dd103e-a100-4fcb-8325-ff4ee2cc7166 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c2c4b3d54292e93eec782e2cf0015ac78247600492096ce3cefd0c3e2a7cdba4
	Sep 06 20:21:33 multinode-782472 crio[900]: time="2023-09-06 20:21:33.424305615Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-pwl5s/POD" id=6a2c552c-27b5-468b-a035-19967c2e2d8d name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 06 20:21:33 multinode-782472 crio[900]: time="2023-09-06 20:21:33.424377049Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 06 20:21:33 multinode-782472 crio[900]: time="2023-09-06 20:21:33.441036142Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-pwl5s Namespace:default ID:18f5d573f2a478a8719e399246e2100d44e94d240362cffc417e63d8b367b174 UID:1531a28c-cfd7-470a-abf2-596eebb222de NetNS:/var/run/netns/b833d5f5-447f-43d1-8051-ffa33a874699 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 06 20:21:33 multinode-782472 crio[900]: time="2023-09-06 20:21:33.441074469Z" level=info msg="Adding pod default_busybox-5bc68d56bd-pwl5s to CNI network \"kindnet\" (type=ptp)"
	Sep 06 20:21:33 multinode-782472 crio[900]: time="2023-09-06 20:21:33.453211362Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-pwl5s Namespace:default ID:18f5d573f2a478a8719e399246e2100d44e94d240362cffc417e63d8b367b174 UID:1531a28c-cfd7-470a-abf2-596eebb222de NetNS:/var/run/netns/b833d5f5-447f-43d1-8051-ffa33a874699 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 06 20:21:33 multinode-782472 crio[900]: time="2023-09-06 20:21:33.453370828Z" level=info msg="Checking pod default_busybox-5bc68d56bd-pwl5s for CNI network kindnet (type=ptp)"
	Sep 06 20:21:33 multinode-782472 crio[900]: time="2023-09-06 20:21:33.459004576Z" level=info msg="Ran pod sandbox 18f5d573f2a478a8719e399246e2100d44e94d240362cffc417e63d8b367b174 with infra container: default/busybox-5bc68d56bd-pwl5s/POD" id=6a2c552c-27b5-468b-a035-19967c2e2d8d name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 06 20:21:33 multinode-782472 crio[900]: time="2023-09-06 20:21:33.461380589Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=92498974-f24b-4075-ba7d-9a4635984650 name=/runtime.v1.ImageService/ImageStatus
	Sep 06 20:21:33 multinode-782472 crio[900]: time="2023-09-06 20:21:33.461622254Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=92498974-f24b-4075-ba7d-9a4635984650 name=/runtime.v1.ImageService/ImageStatus
	Sep 06 20:21:33 multinode-782472 crio[900]: time="2023-09-06 20:21:33.462730962Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=451c0761-c6c3-498f-a6ef-7a68c85d03b8 name=/runtime.v1.ImageService/PullImage
	Sep 06 20:21:33 multinode-782472 crio[900]: time="2023-09-06 20:21:33.463738879Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 06 20:21:34 multinode-782472 crio[900]: time="2023-09-06 20:21:34.255552242Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 06 20:21:35 multinode-782472 crio[900]: time="2023-09-06 20:21:35.590887242Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=451c0761-c6c3-498f-a6ef-7a68c85d03b8 name=/runtime.v1.ImageService/PullImage
	Sep 06 20:21:35 multinode-782472 crio[900]: time="2023-09-06 20:21:35.592270271Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=87385fc5-e155-4db0-abcc-9f9919f842a1 name=/runtime.v1.ImageService/ImageStatus
	Sep 06 20:21:35 multinode-782472 crio[900]: time="2023-09-06 20:21:35.592960125Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=87385fc5-e155-4db0-abcc-9f9919f842a1 name=/runtime.v1.ImageService/ImageStatus
	Sep 06 20:21:35 multinode-782472 crio[900]: time="2023-09-06 20:21:35.593941596Z" level=info msg="Creating container: default/busybox-5bc68d56bd-pwl5s/busybox" id=fd5ae4d7-0468-4aaf-8974-f79fa659314d name=/runtime.v1.RuntimeService/CreateContainer
	Sep 06 20:21:35 multinode-782472 crio[900]: time="2023-09-06 20:21:35.594036045Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 06 20:21:35 multinode-782472 crio[900]: time="2023-09-06 20:21:35.676929251Z" level=info msg="Created container fa00f7deea5f14d64cdc11dd912faaaab3f319bf73acfef7f4345595fe89a52d: default/busybox-5bc68d56bd-pwl5s/busybox" id=fd5ae4d7-0468-4aaf-8974-f79fa659314d name=/runtime.v1.RuntimeService/CreateContainer
	Sep 06 20:21:35 multinode-782472 crio[900]: time="2023-09-06 20:21:35.677724400Z" level=info msg="Starting container: fa00f7deea5f14d64cdc11dd912faaaab3f319bf73acfef7f4345595fe89a52d" id=1c217461-20b6-435c-85f6-b1ea1ba39fa2 name=/runtime.v1.RuntimeService/StartContainer
	Sep 06 20:21:35 multinode-782472 crio[900]: time="2023-09-06 20:21:35.688363975Z" level=info msg="Started container" PID=2098 containerID=fa00f7deea5f14d64cdc11dd912faaaab3f319bf73acfef7f4345595fe89a52d description=default/busybox-5bc68d56bd-pwl5s/busybox id=1c217461-20b6-435c-85f6-b1ea1ba39fa2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=18f5d573f2a478a8719e399246e2100d44e94d240362cffc417e63d8b367b174
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	fa00f7deea5f1       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago        Running             busybox                   0                   18f5d573f2a47       busybox-5bc68d56bd-pwl5s
	7910937918ca6       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      About a minute ago   Running             coredns                   0                   c2c4b3d54292e       coredns-5dd5756b68-79759
	d5c86fd34d093       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      About a minute ago   Running             storage-provisioner       0                   214cb7f32a06f       storage-provisioner
	fd1f67448f441       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79                                      About a minute ago   Running             kindnet-cni               0                   286613d710945       kindnet-whw4s
	c816ac42ce7f3       812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26                                      About a minute ago   Running             kube-proxy                0                   a1c7693a27e94       kube-proxy-lhjnq
	47c92d00f898b       8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965                                      About a minute ago   Running             kube-controller-manager   0                   cbaf0f9269341       kube-controller-manager-multinode-782472
	178ffe4d6e031       b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87                                      About a minute ago   Running             kube-scheduler            0                   e9b85055fc6dc       kube-scheduler-multinode-782472
	a350439d7f6ed       b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a                                      About a minute ago   Running             kube-apiserver            0                   a84428778ea39       kube-apiserver-multinode-782472
	bc371c84cbfc0       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   3f1e4e004b36e       etcd-multinode-782472
	
	* 
	* ==> coredns [7910937918ca674f0e8fbf9b5949f71dfdc41ae838f9ee65d633776aef230f59] <==
	* [INFO] 10.244.0.3:53941 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114511s
	[INFO] 10.244.1.2:58357 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144475s
	[INFO] 10.244.1.2:45290 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001369459s
	[INFO] 10.244.1.2:40568 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107823s
	[INFO] 10.244.1.2:52846 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068799s
	[INFO] 10.244.1.2:55366 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001162985s
	[INFO] 10.244.1.2:55277 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075135s
	[INFO] 10.244.1.2:53031 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082625s
	[INFO] 10.244.1.2:41379 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072344s
	[INFO] 10.244.0.3:32869 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117514s
	[INFO] 10.244.0.3:39887 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000238564s
	[INFO] 10.244.0.3:59879 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070408s
	[INFO] 10.244.0.3:60053 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092578s
	[INFO] 10.244.1.2:46072 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172865s
	[INFO] 10.244.1.2:47066 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088582s
	[INFO] 10.244.1.2:49441 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093661s
	[INFO] 10.244.1.2:34867 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007474s
	[INFO] 10.244.0.3:36966 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080221s
	[INFO] 10.244.0.3:40595 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154576s
	[INFO] 10.244.0.3:37859 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109498s
	[INFO] 10.244.0.3:56638 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000137895s
	[INFO] 10.244.1.2:48843 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153969s
	[INFO] 10.244.1.2:34375 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000070875s
	[INFO] 10.244.1.2:51584 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074971s
	[INFO] 10.244.1.2:54280 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00007095s
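	The CoreDNS query log above records in-cluster lookups such as A/AAAA for kubernetes.default.svc.cluster.local arriving at the kube-dns ClusterIP 10.96.0.10 (allocated by the apiserver, see its section later in this log). A hypothetical Go sketch of one such lookup, sent directly to that resolver from inside a pod; illustrative only, not part of the test:

	package main

	import (
		"context"
		"fmt"
		"log"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				// Bypass the host resolver and talk to kube-dns directly over UDP.
				return d.DialContext(ctx, "udp", "10.96.0.10:53")
			},
		}
		addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(addrs) // expected: [10.96.0.1], the kubernetes Service ClusterIP
	}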
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-782472
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-782472
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138
	                    minikube.k8s.io/name=multinode-782472
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_06T20_20_25_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Sep 2023 20:20:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-782472
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Sep 2023 20:21:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Sep 2023 20:20:39 +0000   Wed, 06 Sep 2023 20:20:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Sep 2023 20:20:39 +0000   Wed, 06 Sep 2023 20:20:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Sep 2023 20:20:39 +0000   Wed, 06 Sep 2023 20:20:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Sep 2023 20:20:39 +0000   Wed, 06 Sep 2023 20:20:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-782472
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 b49f2fdb51b7407d86106a33a8fb30ba
	  System UUID:                2236c56d-2ef3-4237-a1aa-772c7e0857d9
	  Boot ID:                    d5624a78-31f3-41c0-a03f-adfa6e3f71eb
	  Kernel Version:             5.15.0-1044-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-pwl5s                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-79759                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     64s
	  kube-system                 etcd-multinode-782472                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         77s
	  kube-system                 kindnet-whw4s                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      65s
	  kube-system                 kube-apiserver-multinode-782472             250m (12%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-controller-manager-multinode-782472    200m (10%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-proxy-lhjnq                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-scheduler-multinode-782472             100m (5%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 62s   kube-proxy       
	  Normal  Starting                 77s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  77s   kubelet          Node multinode-782472 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s   kubelet          Node multinode-782472 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s   kubelet          Node multinode-782472 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           65s   node-controller  Node multinode-782472 event: Registered Node multinode-782472 in Controller
	  Normal  NodeReady                62s   kubelet          Node multinode-782472 status is now: NodeReady
	
	
	Name:               multinode-782472-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-782472-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Sep 2023 20:20:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-782472-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Sep 2023 20:21:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Sep 2023 20:21:30 +0000   Wed, 06 Sep 2023 20:20:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Sep 2023 20:21:30 +0000   Wed, 06 Sep 2023 20:20:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Sep 2023 20:21:30 +0000   Wed, 06 Sep 2023 20:20:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Sep 2023 20:21:30 +0000   Wed, 06 Sep 2023 20:21:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-782472-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 59b81217f0eb42078ec28fcadc4b543e
	  System UUID:                a0b4d1f6-2fee-4643-a163-74587920af1d
	  Boot ID:                    d5624a78-31f3-41c0-a03f-adfa6e3f71eb
	  Kernel Version:             5.15.0-1044-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-thpl6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-4wmpx               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      43s
	  kube-system                 kube-proxy-z87gw            0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 41s                kube-proxy       
	  Normal  NodeHasSufficientMemory  43s (x5 over 44s)  kubelet          Node multinode-782472-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x5 over 44s)  kubelet          Node multinode-782472-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x5 over 44s)  kubelet          Node multinode-782472-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s                node-controller  Node multinode-782472-m02 event: Registered Node multinode-782472-m02 in Controller
	  Normal  NodeReady                11s                kubelet          Node multinode-782472-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001083] FS-Cache: O-key=[8] '96d3c90000000000'
	[  +0.000766] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000988] FS-Cache: N-cookie d=00000000a39b565b{9p.inode} n=000000002b2f1a65
	[  +0.001160] FS-Cache: N-key=[8] '96d3c90000000000'
	[  +0.002380] FS-Cache: Duplicate cookie detected
	[  +0.000722] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.000991] FS-Cache: O-cookie d=00000000a39b565b{9p.inode} n=00000000f3c7fb8d
	[  +0.001073] FS-Cache: O-key=[8] '96d3c90000000000'
	[  +0.000829] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000982] FS-Cache: N-cookie d=00000000a39b565b{9p.inode} n=0000000050869d71
	[  +0.001077] FS-Cache: N-key=[8] '96d3c90000000000'
	[  +2.999130] FS-Cache: Duplicate cookie detected
	[  +0.000756] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.000998] FS-Cache: O-cookie d=00000000a39b565b{9p.inode} n=00000000da17136c
	[  +0.001217] FS-Cache: O-key=[8] '95d3c90000000000'
	[  +0.000727] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000970] FS-Cache: N-cookie d=00000000a39b565b{9p.inode} n=000000002b2f1a65
	[  +0.001133] FS-Cache: N-key=[8] '95d3c90000000000'
	[  +0.318024] FS-Cache: Duplicate cookie detected
	[  +0.000783] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.000990] FS-Cache: O-cookie d=00000000a39b565b{9p.inode} n=000000003cc11187
	[  +0.001164] FS-Cache: O-key=[8] '9bd3c90000000000'
	[  +0.000748] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000986] FS-Cache: N-cookie d=00000000a39b565b{9p.inode} n=00000000302c6dfe
	[  +0.001111] FS-Cache: N-key=[8] '9bd3c90000000000'
	
	* 
	* ==> etcd [bc371c84cbfc0b0d54b437a237d548aa34ffc87e72d8d71504bbae33965e8628] <==
	* {"level":"info","ts":"2023-09-06T20:20:16.933054Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-06T20:20:16.933117Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-06T20:20:16.933168Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-06T20:20:16.933685Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-09-06T20:20:16.933746Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-09-06T20:20:16.938362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-09-06T20:20:16.938465Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-09-06T20:20:17.298088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-06T20:20:17.298141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-06T20:20:17.298159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-09-06T20:20:17.298172Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-09-06T20:20:17.29818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-09-06T20:20:17.298189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-09-06T20:20:17.298197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-09-06T20:20:17.302193Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-06T20:20:17.306221Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-782472 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-06T20:20:17.310067Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-06T20:20:17.310111Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-06T20:20:17.310187Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-06T20:20:17.310217Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-06T20:20:17.310235Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-06T20:20:17.311247Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-06T20:20:17.311669Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-06T20:20:17.311693Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-06T20:20:17.323008Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	
	* 
	* ==> kernel <==
	*  20:21:41 up  3:00,  0 users,  load average: 2.21, 2.25, 1.85
	Linux multinode-782472 5.15.0-1044-aws #49~20.04.1-Ubuntu SMP Mon Aug 21 17:10:24 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [fd1f67448f4412a3888bd9c6da07dbc42124633c8ef259368bbc8b6ef5012e4d] <==
	* I0906 20:20:38.725077       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0906 20:20:38.725111       1 main.go:227] handling current node
	I0906 20:20:48.831510       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0906 20:20:48.831623       1 main.go:227] handling current node
	I0906 20:20:58.850915       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0906 20:20:58.850944       1 main.go:227] handling current node
	I0906 20:20:58.850955       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0906 20:20:58.850961       1 main.go:250] Node multinode-782472-m02 has CIDR [10.244.1.0/24] 
	I0906 20:20:58.851083       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0906 20:21:08.854931       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0906 20:21:08.854964       1 main.go:227] handling current node
	I0906 20:21:08.854974       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0906 20:21:08.854980       1 main.go:250] Node multinode-782472-m02 has CIDR [10.244.1.0/24] 
	I0906 20:21:18.866038       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0906 20:21:18.866100       1 main.go:227] handling current node
	I0906 20:21:18.866111       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0906 20:21:18.866117       1 main.go:250] Node multinode-782472-m02 has CIDR [10.244.1.0/24] 
	I0906 20:21:28.870782       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0906 20:21:28.870815       1 main.go:227] handling current node
	I0906 20:21:28.870827       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0906 20:21:28.870833       1 main.go:250] Node multinode-782472-m02 has CIDR [10.244.1.0/24] 
	I0906 20:21:38.883083       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0906 20:21:38.883205       1 main.go:227] handling current node
	I0906 20:21:38.883238       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0906 20:21:38.883300       1 main.go:250] Node multinode-782472-m02 has CIDR [10.244.1.0/24] 
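	The kindnet entries above show its per-node route reconciliation: for the remote node multinode-782472-m02 it installs a route sending that node's pod CIDR 10.244.1.0/24 via the node's InternalIP 192.168.58.3. A hypothetical sketch of that single netlink operation follows (not kindnet's source; uses github.com/vishvananda/netlink and needs CAP_NET_ADMIN on the node):

	package main

	import (
		"log"
		"net"

		"github.com/vishvananda/netlink"
	)

	func main() {
		// Remote node's PodCIDR, as reported in the kindnet log above.
		_, podCIDR, err := net.ParseCIDR("10.244.1.0/24")
		if err != nil {
			log.Fatal(err)
		}
		route := &netlink.Route{
			Dst: podCIDR,
			Gw:  net.ParseIP("192.168.58.3"), // remote node's InternalIP
		}
		// RouteReplace installs the route, overwriting any stale entry for the same destination.
		if err := netlink.RouteReplace(route); err != nil {
			log.Fatal(err)
		}
	}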
	
	* 
	* ==> kube-apiserver [a350439d7f6ed2881f0197afa7a0f3e64a25ee036881df5310c6b74723dfe955] <==
	* I0906 20:20:20.971145       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0906 20:20:20.971159       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 20:20:20.976901       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0906 20:20:20.977278       1 aggregator.go:166] initial CRD sync complete...
	I0906 20:20:20.977298       1 autoregister_controller.go:141] Starting autoregister controller
	I0906 20:20:20.977307       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0906 20:20:20.977314       1 cache.go:39] Caches are synced for autoregister controller
	I0906 20:20:20.992103       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0906 20:20:20.998332       1 controller.go:624] quota admission added evaluator for: namespaces
	I0906 20:20:21.069882       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 20:20:21.775924       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0906 20:20:21.782572       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0906 20:20:21.782597       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0906 20:20:22.302282       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 20:20:22.348477       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0906 20:20:22.415372       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0906 20:20:22.421971       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0906 20:20:22.423103       1 controller.go:624] quota admission added evaluator for: endpoints
	I0906 20:20:22.427275       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0906 20:20:22.937717       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0906 20:20:24.109597       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0906 20:20:24.126558       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0906 20:20:24.137207       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0906 20:20:36.761504       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0906 20:20:36.910883       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [47c92d00f898bf5f7c190fe91cf5bd3f1deda1cf138a82d4939e884660577687] <==
	* I0906 20:20:37.474757       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="177.805µs"
	I0906 20:20:39.248558       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.994µs"
	I0906 20:20:39.262087       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.078µs"
	I0906 20:20:40.341947       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="108.75µs"
	I0906 20:20:40.381067       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.523841ms"
	I0906 20:20:40.381244       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.393µs"
	I0906 20:20:41.207089       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0906 20:20:58.615983       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-782472-m02\" does not exist"
	I0906 20:20:58.638767       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-782472-m02" podCIDRs=["10.244.1.0/24"]
	I0906 20:20:58.648798       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-4wmpx"
	I0906 20:20:58.648903       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-z87gw"
	I0906 20:21:01.210144       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-782472-m02"
	I0906 20:21:01.210265       1 event.go:307] "Event occurred" object="multinode-782472-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-782472-m02 event: Registered Node multinode-782472-m02 in Controller"
	I0906 20:21:30.194730       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-782472-m02"
	I0906 20:21:33.063956       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0906 20:21:33.079813       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-thpl6"
	I0906 20:21:33.100034       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-pwl5s"
	I0906 20:21:33.117447       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="54.259278ms"
	I0906 20:21:33.157073       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="39.549752ms"
	I0906 20:21:33.157176       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="65.354µs"
	I0906 20:21:33.164344       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="95.557µs"
	I0906 20:21:36.302549       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.696066ms"
	I0906 20:21:36.302677       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="74.133µs"
	I0906 20:21:36.454709       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.455269ms"
	I0906 20:21:36.455425       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="52.825µs"
	
	* 
	* ==> kube-proxy [c816ac42ce7f3ec44906b732621cbe5382cc8e57c0260d8a45c8663bd975bbdd] <==
	* I0906 20:20:38.396471       1 server_others.go:69] "Using iptables proxy"
	I0906 20:20:38.412053       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0906 20:20:38.441922       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0906 20:20:38.444178       1 server_others.go:152] "Using iptables Proxier"
	I0906 20:20:38.444274       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0906 20:20:38.444305       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0906 20:20:38.444410       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0906 20:20:38.444685       1 server.go:846] "Version info" version="v1.28.1"
	I0906 20:20:38.444747       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 20:20:38.445690       1 config.go:188] "Starting service config controller"
	I0906 20:20:38.445774       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0906 20:20:38.445834       1 config.go:97] "Starting endpoint slice config controller"
	I0906 20:20:38.445874       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0906 20:20:38.446608       1 config.go:315] "Starting node config controller"
	I0906 20:20:38.446666       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0906 20:20:38.546846       1 shared_informer.go:318] Caches are synced for node config
	I0906 20:20:38.546854       1 shared_informer.go:318] Caches are synced for service config
	I0906 20:20:38.546909       1 shared_informer.go:318] Caches are synced for endpoint slice config
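	The proxier line above about route_localnet refers to the net.ipv4.conf.all.route_localnet sysctl, which kube-proxy enables so NodePort services stay reachable on 127.0.0.1. A minimal illustration of that one step is below (illustrative only, not kube-proxy's implementation; must run as root in the node's network namespace):

	package main

	import (
		"log"
		"os"
	)

	func main() {
		// Same effect as: sysctl -w net.ipv4.conf.all.route_localnet=1
		const path = "/proc/sys/net/ipv4/conf/all/route_localnet"
		if err := os.WriteFile(path, []byte("1"), 0o644); err != nil {
			log.Fatal(err)
		}
	}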
	
	* 
	* ==> kube-scheduler [178ffe4d6e03146bde7e506d7cb806bf835735f35ba07931bfe020fc16a715e5] <==
	* W0906 20:20:21.047517       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 20:20:21.048372       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0906 20:20:21.049786       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 20:20:21.050104       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0906 20:20:21.049967       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 20:20:21.050215       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0906 20:20:21.050024       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 20:20:21.050300       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0906 20:20:21.870969       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 20:20:21.871003       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0906 20:20:21.884538       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 20:20:21.884641       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0906 20:20:21.904848       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 20:20:21.904889       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0906 20:20:22.005300       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 20:20:22.005425       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0906 20:20:22.028023       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 20:20:22.028133       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0906 20:20:22.064198       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 20:20:22.064237       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0906 20:20:22.081352       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 20:20:22.081388       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 20:20:22.115175       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 20:20:22.115212       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0906 20:20:24.938172       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Sep 06 20:20:36 multinode-782472 kubelet[1398]: W0906 20:20:36.939988    1398 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-782472" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'multinode-782472' and this object
	Sep 06 20:20:36 multinode-782472 kubelet[1398]: E0906 20:20:36.940038    1398 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-782472" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'multinode-782472' and this object
	Sep 06 20:20:36 multinode-782472 kubelet[1398]: I0906 20:20:36.948905    1398 topology_manager.go:215] "Topology Admit Handler" podUID="92a15983-5281-4989-b838-0b61276da955" podNamespace="kube-system" podName="kindnet-whw4s"
	Sep 06 20:20:37 multinode-782472 kubelet[1398]: I0906 20:20:37.014409    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2eb21731-931d-41b6-a6d8-da9bb0d0d3ff-lib-modules\") pod \"kube-proxy-lhjnq\" (UID: \"2eb21731-931d-41b6-a6d8-da9bb0d0d3ff\") " pod="kube-system/kube-proxy-lhjnq"
	Sep 06 20:20:37 multinode-782472 kubelet[1398]: I0906 20:20:37.014487    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7mt4\" (UniqueName: \"kubernetes.io/projected/2eb21731-931d-41b6-a6d8-da9bb0d0d3ff-kube-api-access-k7mt4\") pod \"kube-proxy-lhjnq\" (UID: \"2eb21731-931d-41b6-a6d8-da9bb0d0d3ff\") " pod="kube-system/kube-proxy-lhjnq"
	Sep 06 20:20:37 multinode-782472 kubelet[1398]: I0906 20:20:37.014516    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92a15983-5281-4989-b838-0b61276da955-lib-modules\") pod \"kindnet-whw4s\" (UID: \"92a15983-5281-4989-b838-0b61276da955\") " pod="kube-system/kindnet-whw4s"
	Sep 06 20:20:37 multinode-782472 kubelet[1398]: I0906 20:20:37.014540    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48cjw\" (UniqueName: \"kubernetes.io/projected/92a15983-5281-4989-b838-0b61276da955-kube-api-access-48cjw\") pod \"kindnet-whw4s\" (UID: \"92a15983-5281-4989-b838-0b61276da955\") " pod="kube-system/kindnet-whw4s"
	Sep 06 20:20:37 multinode-782472 kubelet[1398]: I0906 20:20:37.014564    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2eb21731-931d-41b6-a6d8-da9bb0d0d3ff-kube-proxy\") pod \"kube-proxy-lhjnq\" (UID: \"2eb21731-931d-41b6-a6d8-da9bb0d0d3ff\") " pod="kube-system/kube-proxy-lhjnq"
	Sep 06 20:20:37 multinode-782472 kubelet[1398]: I0906 20:20:37.014588    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/92a15983-5281-4989-b838-0b61276da955-cni-cfg\") pod \"kindnet-whw4s\" (UID: \"92a15983-5281-4989-b838-0b61276da955\") " pod="kube-system/kindnet-whw4s"
	Sep 06 20:20:37 multinode-782472 kubelet[1398]: I0906 20:20:37.014611    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2eb21731-931d-41b6-a6d8-da9bb0d0d3ff-xtables-lock\") pod \"kube-proxy-lhjnq\" (UID: \"2eb21731-931d-41b6-a6d8-da9bb0d0d3ff\") " pod="kube-system/kube-proxy-lhjnq"
	Sep 06 20:20:37 multinode-782472 kubelet[1398]: I0906 20:20:37.014636    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92a15983-5281-4989-b838-0b61276da955-xtables-lock\") pod \"kindnet-whw4s\" (UID: \"92a15983-5281-4989-b838-0b61276da955\") " pod="kube-system/kindnet-whw4s"
	Sep 06 20:20:38 multinode-782472 kubelet[1398]: W0906 20:20:38.175440    1398 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/4f96b0b3ad5d839fd8a7a05da769dc0c581f9a93b92cde73511040b7cf72780a/crio-a1c7693a27e94026c427a426e640909dbb48301956ac3bb30a285e79eb6f6bad WatchSource:0}: Error finding container a1c7693a27e94026c427a426e640909dbb48301956ac3bb30a285e79eb6f6bad: Status 404 returned error can't find the container with id a1c7693a27e94026c427a426e640909dbb48301956ac3bb30a285e79eb6f6bad
	Sep 06 20:20:39 multinode-782472 kubelet[1398]: I0906 20:20:39.222101    1398 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Sep 06 20:20:39 multinode-782472 kubelet[1398]: I0906 20:20:39.247914    1398 topology_manager.go:215] "Topology Admit Handler" podUID="b492a232-9d20-4012-8a94-0ff7eca50db6" podNamespace="kube-system" podName="coredns-5dd5756b68-79759"
	Sep 06 20:20:39 multinode-782472 kubelet[1398]: I0906 20:20:39.252789    1398 topology_manager.go:215] "Topology Admit Handler" podUID="7d968ea2-93d4-4741-8b34-e531ffe5a253" podNamespace="kube-system" podName="storage-provisioner"
	Sep 06 20:20:39 multinode-782472 kubelet[1398]: I0906 20:20:39.350507    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-whw4s" podStartSLOduration=3.3504541310000002 podCreationTimestamp="2023-09-06 20:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-06 20:20:39.336005053 +0000 UTC m=+15.263902989" watchObservedRunningTime="2023-09-06 20:20:39.350454131 +0000 UTC m=+15.278352076"
	Sep 06 20:20:39 multinode-782472 kubelet[1398]: I0906 20:20:39.351001    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lhjnq" podStartSLOduration=3.35096901 podCreationTimestamp="2023-09-06 20:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-06 20:20:39.35015133 +0000 UTC m=+15.278049283" watchObservedRunningTime="2023-09-06 20:20:39.35096901 +0000 UTC m=+15.278866947"
	Sep 06 20:20:39 multinode-782472 kubelet[1398]: I0906 20:20:39.427858    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b492a232-9d20-4012-8a94-0ff7eca50db6-config-volume\") pod \"coredns-5dd5756b68-79759\" (UID: \"b492a232-9d20-4012-8a94-0ff7eca50db6\") " pod="kube-system/coredns-5dd5756b68-79759"
	Sep 06 20:20:39 multinode-782472 kubelet[1398]: I0906 20:20:39.427923    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7d968ea2-93d4-4741-8b34-e531ffe5a253-tmp\") pod \"storage-provisioner\" (UID: \"7d968ea2-93d4-4741-8b34-e531ffe5a253\") " pod="kube-system/storage-provisioner"
	Sep 06 20:20:39 multinode-782472 kubelet[1398]: I0906 20:20:39.427952    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k94c\" (UniqueName: \"kubernetes.io/projected/b492a232-9d20-4012-8a94-0ff7eca50db6-kube-api-access-9k94c\") pod \"coredns-5dd5756b68-79759\" (UID: \"b492a232-9d20-4012-8a94-0ff7eca50db6\") " pod="kube-system/coredns-5dd5756b68-79759"
	Sep 06 20:20:39 multinode-782472 kubelet[1398]: I0906 20:20:39.427976    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b7bb\" (UniqueName: \"kubernetes.io/projected/7d968ea2-93d4-4741-8b34-e531ffe5a253-kube-api-access-2b7bb\") pod \"storage-provisioner\" (UID: \"7d968ea2-93d4-4741-8b34-e531ffe5a253\") " pod="kube-system/storage-provisioner"
	Sep 06 20:20:40 multinode-782472 kubelet[1398]: I0906 20:20:40.357313    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-79759" podStartSLOduration=3.357269914 podCreationTimestamp="2023-09-06 20:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-06 20:20:40.341562048 +0000 UTC m=+16.269459993" watchObservedRunningTime="2023-09-06 20:20:40.357269914 +0000 UTC m=+16.285167859"
	Sep 06 20:21:33 multinode-782472 kubelet[1398]: I0906 20:21:33.122924    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=55.122879432 podCreationTimestamp="2023-09-06 20:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-06 20:20:40.379617977 +0000 UTC m=+16.307515930" watchObservedRunningTime="2023-09-06 20:21:33.122879432 +0000 UTC m=+69.050777378"
	Sep 06 20:21:33 multinode-782472 kubelet[1398]: I0906 20:21:33.123168    1398 topology_manager.go:215] "Topology Admit Handler" podUID="1531a28c-cfd7-470a-abf2-596eebb222de" podNamespace="default" podName="busybox-5bc68d56bd-pwl5s"
	Sep 06 20:21:33 multinode-782472 kubelet[1398]: I0906 20:21:33.209171    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84gz8\" (UniqueName: \"kubernetes.io/projected/1531a28c-cfd7-470a-abf2-596eebb222de-kube-api-access-84gz8\") pod \"busybox-5bc68d56bd-pwl5s\" (UID: \"1531a28c-cfd7-470a-abf2-596eebb222de\") " pod="default/busybox-5bc68d56bd-pwl5s"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-782472 -n multinode-782472
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-782472 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.46s)
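helpers_test.go:261 above narrows the post-mortem pod listing to anything that is not in the Running phase. A minimal standalone sketch of that same query, shown here only for reference (context name taken from this run's logs; not part of the test harness):

	# List every pod in the multinode-782472 cluster whose phase is not Running,
	# across all namespaces, printing only the pod names (same field selector
	# the post-mortem helper uses above).
	kubectl --context multinode-782472 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'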

                                                
                                    
x
+
TestRunningBinaryUpgrade (74.76s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.17.0.3132456921.exe start -p running-upgrade-935044 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0906 20:37:55.492917  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.17.0.3132456921.exe start -p running-upgrade-935044 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m7.196449089s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-935044 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-935044 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (3.007871822s)

                                                
                                                
-- stdout --
	* [running-upgrade-935044] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-935044 in cluster running-upgrade-935044
	* Pulling base image ...
	* Updating the running docker "running-upgrade-935044" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 20:38:50.526358  787697 out.go:296] Setting OutFile to fd 1 ...
	I0906 20:38:50.526568  787697 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:38:50.526596  787697 out.go:309] Setting ErrFile to fd 2...
	I0906 20:38:50.526617  787697 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:38:50.526903  787697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17116-652515/.minikube/bin
	I0906 20:38:50.527315  787697 out.go:303] Setting JSON to false
	I0906 20:38:50.529533  787697 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11885,"bootTime":1694020846,"procs":345,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0906 20:38:50.529653  787697 start.go:138] virtualization:  
	I0906 20:38:50.532356  787697 out.go:177] * [running-upgrade-935044] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0906 20:38:50.534563  787697 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 20:38:50.536098  787697 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 20:38:50.534724  787697 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0906 20:38:50.534799  787697 notify.go:220] Checking for updates...
	I0906 20:38:50.540170  787697 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig
	I0906 20:38:50.541997  787697 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube
	I0906 20:38:50.543670  787697 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0906 20:38:50.545232  787697 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 20:38:50.547975  787697 config.go:182] Loaded profile config "running-upgrade-935044": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0906 20:38:50.552154  787697 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0906 20:38:50.555880  787697 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 20:38:50.592773  787697 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0906 20:38:50.592990  787697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 20:38:50.700424  787697 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:54 SystemTime:2023-09-06 20:38:50.688262149 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0906 20:38:50.700537  787697 docker.go:294] overlay module found
	I0906 20:38:50.702538  787697 out.go:177] * Using the docker driver based on existing profile
	I0906 20:38:50.704667  787697 start.go:298] selected driver: docker
	I0906 20:38:50.704683  787697 start.go:902] validating driver "docker" against &{Name:running-upgrade-935044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-935044 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.133 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0906 20:38:50.704792  787697 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 20:38:50.705466  787697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 20:38:50.727463  787697 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I0906 20:38:50.794419  787697 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:54 SystemTime:2023-09-06 20:38:50.783715691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0906 20:38:50.794757  787697 cni.go:84] Creating CNI manager for ""
	I0906 20:38:50.794773  787697 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0906 20:38:50.794782  787697 start_flags.go:321] config:
	{Name:running-upgrade-935044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-935044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.133 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0906 20:38:50.797630  787697 out.go:177] * Starting control plane node running-upgrade-935044 in cluster running-upgrade-935044
	I0906 20:38:50.799421  787697 cache.go:122] Beginning downloading kic base image for docker with crio
	I0906 20:38:50.801107  787697 out.go:177] * Pulling base image ...
	I0906 20:38:50.803000  787697 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0906 20:38:50.803078  787697 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0906 20:38:50.821921  787697 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0906 20:38:50.821948  787697 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0906 20:38:50.878452  787697 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0906 20:38:50.878613  787697 profile.go:148] Saving config to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/running-upgrade-935044/config.json ...
	I0906 20:38:50.878723  787697 cache.go:107] acquiring lock: {Name:mk761ea5917e65ea5320237ae9d3fd919647d74d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:38:50.878807  787697 cache.go:115] /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0906 20:38:50.878816  787697 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 99.88µs
	I0906 20:38:50.878847  787697 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0906 20:38:50.878855  787697 cache.go:107] acquiring lock: {Name:mk1a4e838c2ad274a72380629743f1b35f47dd39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:38:50.878891  787697 cache.go:115] /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0906 20:38:50.878896  787697 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 41.584µs
	I0906 20:38:50.878902  787697 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0906 20:38:50.878909  787697 cache.go:107] acquiring lock: {Name:mkc27320f8e3da16932e91e3f74bf5d5b33dc664 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:38:50.878935  787697 cache.go:115] /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0906 20:38:50.878940  787697 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 32.328µs
	I0906 20:38:50.878953  787697 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0906 20:38:50.878959  787697 cache.go:107] acquiring lock: {Name:mk53179198066eaf3115f5ed6bbe3ab3db1522c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:38:50.878978  787697 cache.go:195] Successfully downloaded all kic artifacts
	I0906 20:38:50.878988  787697 cache.go:115] /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0906 20:38:50.878994  787697 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 35.282µs
	I0906 20:38:50.879000  787697 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0906 20:38:50.879007  787697 cache.go:107] acquiring lock: {Name:mk6a4b577aeafaa6ec13d04d8bb7a342c256843b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:38:50.879014  787697 start.go:365] acquiring machines lock for running-upgrade-935044: {Name:mk0c6c85e30bc4142c8f4af7fc93e57dc4f9208e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:38:50.879033  787697 cache.go:115] /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0906 20:38:50.879038  787697 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 32.648µs
	I0906 20:38:50.879045  787697 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0906 20:38:50.879055  787697 start.go:369] acquired machines lock for "running-upgrade-935044" in 27.2µs
	I0906 20:38:50.879053  787697 cache.go:107] acquiring lock: {Name:mk22f096c6a91c8e67a172b4be8ed0577944fdba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:38:50.879069  787697 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:38:50.879075  787697 fix.go:54] fixHost starting: 
	I0906 20:38:50.879079  787697 cache.go:115] /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0906 20:38:50.879083  787697 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 31.368µs
	I0906 20:38:50.879089  787697 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0906 20:38:50.879097  787697 cache.go:107] acquiring lock: {Name:mk627e07c0eeaa37b5facf9ad8431a66a5f5c500 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:38:50.879121  787697 cache.go:115] /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0906 20:38:50.879125  787697 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 28.48µs
	I0906 20:38:50.879130  787697 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0906 20:38:50.879140  787697 cache.go:107] acquiring lock: {Name:mk9a640a08153bc795cd4dd4cfaabc34e6d59789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:38:50.879164  787697 cache.go:115] /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0906 20:38:50.879168  787697 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 30.047µs
	I0906 20:38:50.879176  787697 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0906 20:38:50.879182  787697 cache.go:87] Successfully saved all images to host disk.
	I0906 20:38:50.879328  787697 cli_runner.go:164] Run: docker container inspect running-upgrade-935044 --format={{.State.Status}}
	I0906 20:38:50.898040  787697 fix.go:102] recreateIfNeeded on running-upgrade-935044: state=Running err=<nil>
	W0906 20:38:50.898118  787697 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 20:38:50.900425  787697 out.go:177] * Updating the running docker "running-upgrade-935044" container ...
	I0906 20:38:50.902312  787697 machine.go:88] provisioning docker machine ...
	I0906 20:38:50.902348  787697 ubuntu.go:169] provisioning hostname "running-upgrade-935044"
	I0906 20:38:50.902430  787697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-935044
	I0906 20:38:50.920661  787697 main.go:141] libmachine: Using SSH client type: native
	I0906 20:38:50.921137  787697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33608 <nil> <nil>}
	I0906 20:38:50.921158  787697 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-935044 && echo "running-upgrade-935044" | sudo tee /etc/hostname
	I0906 20:38:51.076229  787697 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-935044
	
	I0906 20:38:51.076309  787697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-935044
	I0906 20:38:51.098504  787697 main.go:141] libmachine: Using SSH client type: native
	I0906 20:38:51.098962  787697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33608 <nil> <nil>}
	I0906 20:38:51.098986  787697 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-935044' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-935044/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-935044' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:38:51.244406  787697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:38:51.244428  787697 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17116-652515/.minikube CaCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17116-652515/.minikube}
	I0906 20:38:51.244448  787697 ubuntu.go:177] setting up certificates
	I0906 20:38:51.244457  787697 provision.go:83] configureAuth start
	I0906 20:38:51.244516  787697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-935044
	I0906 20:38:51.275815  787697 provision.go:138] copyHostCerts
	I0906 20:38:51.275877  787697 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem, removing ...
	I0906 20:38:51.275886  787697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem
	I0906 20:38:51.275963  787697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem (1082 bytes)
	I0906 20:38:51.276059  787697 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem, removing ...
	I0906 20:38:51.276064  787697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem
	I0906 20:38:51.276091  787697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem (1123 bytes)
	I0906 20:38:51.276154  787697 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem, removing ...
	I0906 20:38:51.276159  787697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem
	I0906 20:38:51.276292  787697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem (1679 bytes)
	I0906 20:38:51.276364  787697 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-935044 san=[192.168.70.133 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-935044]
	I0906 20:38:51.760590  787697 provision.go:172] copyRemoteCerts
	I0906 20:38:51.760714  787697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:38:51.760771  787697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-935044
	I0906 20:38:51.784880  787697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33608 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/running-upgrade-935044/id_rsa Username:docker}
	I0906 20:38:51.889885  787697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 20:38:51.947475  787697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0906 20:38:51.976235  787697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 20:38:52.003419  787697 provision.go:86] duration metric: configureAuth took 758.952936ms
	I0906 20:38:52.003487  787697 ubuntu.go:193] setting minikube options for container-runtime
	I0906 20:38:52.003732  787697 config.go:182] Loaded profile config "running-upgrade-935044": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0906 20:38:52.003865  787697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-935044
	I0906 20:38:52.038130  787697 main.go:141] libmachine: Using SSH client type: native
	I0906 20:38:52.038569  787697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33608 <nil> <nil>}
	I0906 20:38:52.038595  787697 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:38:52.658649  787697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:38:52.658669  787697 machine.go:91] provisioned docker machine in 1.756339017s
	I0906 20:38:52.658680  787697 start.go:300] post-start starting for "running-upgrade-935044" (driver="docker")
	I0906 20:38:52.658689  787697 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:38:52.658750  787697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:38:52.658801  787697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-935044
	I0906 20:38:52.678272  787697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33608 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/running-upgrade-935044/id_rsa Username:docker}
	I0906 20:38:52.783042  787697 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:38:52.787160  787697 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 20:38:52.787184  787697 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 20:38:52.787196  787697 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 20:38:52.787202  787697 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0906 20:38:52.787212  787697 filesync.go:126] Scanning /home/jenkins/minikube-integration/17116-652515/.minikube/addons for local assets ...
	I0906 20:38:52.787273  787697 filesync.go:126] Scanning /home/jenkins/minikube-integration/17116-652515/.minikube/files for local assets ...
	I0906 20:38:52.787354  787697 filesync.go:149] local asset: /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem -> 6579002.pem in /etc/ssl/certs
	I0906 20:38:52.787462  787697 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:38:52.796407  787697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem --> /etc/ssl/certs/6579002.pem (1708 bytes)
	I0906 20:38:52.820458  787697 start.go:303] post-start completed in 161.762576ms
	I0906 20:38:52.820539  787697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 20:38:52.820593  787697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-935044
	I0906 20:38:52.839849  787697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33608 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/running-upgrade-935044/id_rsa Username:docker}
	I0906 20:38:52.936168  787697 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 20:38:52.942611  787697 fix.go:56] fixHost completed within 2.063528076s
	I0906 20:38:52.942634  787697 start.go:83] releasing machines lock for "running-upgrade-935044", held for 2.063571712s
	I0906 20:38:52.942726  787697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-935044
	I0906 20:38:52.964437  787697 ssh_runner.go:195] Run: cat /version.json
	I0906 20:38:52.964487  787697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-935044
	I0906 20:38:52.964824  787697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:38:52.964903  787697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-935044
	I0906 20:38:52.995516  787697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33608 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/running-upgrade-935044/id_rsa Username:docker}
	I0906 20:38:53.011226  787697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33608 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/running-upgrade-935044/id_rsa Username:docker}
	W0906 20:38:53.094394  787697 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0906 20:38:53.094609  787697 ssh_runner.go:195] Run: systemctl --version
	I0906 20:38:53.176582  787697 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:38:53.291091  787697 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 20:38:53.297335  787697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:38:53.322548  787697 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0906 20:38:53.322673  787697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:38:53.371405  787697 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:38:53.371480  787697 start.go:466] detecting cgroup driver to use...
	I0906 20:38:53.371530  787697 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0906 20:38:53.371614  787697 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	W0906 20:38:53.411899  787697 cruntime.go:287] disable failed: sudo systemctl stop -f containerd: Process exited with status 1
	stdout:
	
	stderr:
	Job for containerd.service canceled.
	I0906 20:38:53.411985  787697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	W0906 20:38:53.428356  787697 crio.go:202] disableOthers: containerd is still active
	I0906 20:38:53.428536  787697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:38:53.450615  787697 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0906 20:38:53.450708  787697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:38:53.469226  787697 out.go:177] 
	W0906 20:38:53.471044  787697 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0906 20:38:53.471075  787697 out.go:239] * 
	* 
	W0906 20:38:53.472156  787697 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 20:38:53.473758  787697 out.go:177] 

                                                
                                                
** /stderr **
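The stderr above shows why the second start exits with status 90: the HEAD binary rewrites pause_image through the drop-in file /etc/crio/crio.conf.d/02-crio.conf, but the node provisioned by the v1.17.0 binary (kicbase v0.0.17) evidently does not have that file, so the sed edit fails with status 2 and start aborts with RUNTIME_ENABLE. A hedged way to confirm this against the still-running node container (container name taken from the docker inspect output below; illustrative only, not part of the test):

	# Illustrative only: show the CRI-O config layout inside the node created
	# by the old binary; the crio.conf.d drop-in file is expected to be absent
	# on this image.
	docker exec running-upgrade-935044 ls /etc/crio /etc/crio/crio.conf.d
	# Reading the missing drop-in file reproduces the same class of failure as
	# the sed in the log (GNU sed exits 2 when it cannot read its input).
	docker exec running-upgrade-935044 sed -n '1p' /etc/crio/crio.conf.d/02-crio.conf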
version_upgrade_test.go:144: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-935044 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-09-06 20:38:53.498889714 +0000 UTC m=+2535.634502487
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-935044
helpers_test.go:235: (dbg) docker inspect running-upgrade-935044:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ed4dc65247301bf9fe9e69c2bcc9d4e05064836484bca5e0d50e35463fe59123",
	        "Created": "2023-09-06T20:38:00.548182389Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 784218,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-06T20:38:00.963218498Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/ed4dc65247301bf9fe9e69c2bcc9d4e05064836484bca5e0d50e35463fe59123/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed4dc65247301bf9fe9e69c2bcc9d4e05064836484bca5e0d50e35463fe59123/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed4dc65247301bf9fe9e69c2bcc9d4e05064836484bca5e0d50e35463fe59123/hosts",
	        "LogPath": "/var/lib/docker/containers/ed4dc65247301bf9fe9e69c2bcc9d4e05064836484bca5e0d50e35463fe59123/ed4dc65247301bf9fe9e69c2bcc9d4e05064836484bca5e0d50e35463fe59123-json.log",
	        "Name": "/running-upgrade-935044",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-935044:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-935044",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/628a3552cb11112316232965c954561df9dc7b13451017267e49c5e748657eab-init/diff:/var/lib/docker/overlay2/3ca3844803c20261fffbd3abf87c36258201bdd8b720baafe53fb5f0e1cef2b2/diff:/var/lib/docker/overlay2/42f4fa8823ae920975ef4b3e77104e0ce5537ee0d647c4e560a7380c0dbef7ce/diff:/var/lib/docker/overlay2/4f4312fd1a6c349a6a0110f052579f981d52130f30c9a6b73eac5188cc2e6d39/diff:/var/lib/docker/overlay2/3e117e25284a6c23658700ed8040c9aab61a8c190c2ee6ad851e33caaee943dc/diff:/var/lib/docker/overlay2/06a9f3e13f8e054f47d37dbd717e9a6875582c5d40fe418be2a5f58c386bf224/diff:/var/lib/docker/overlay2/69a6bfd28c5dbbf4c3372a6021432ea658a5d47bc023bbf2fe7bd13dd5886351/diff:/var/lib/docker/overlay2/f9fe5a263fe11ece55dd4ee5567435e2ba4238ebd7115f6afd9f199dcacc06c3/diff:/var/lib/docker/overlay2/236b696cc98c9940476470340881a416040b87d49f549c85c4a10ba45f761b7f/diff:/var/lib/docker/overlay2/fbc634275957713c088d273f19a056241465381b555d462d547cb2331e7cd4e3/diff:/var/lib/docker/overlay2/df3266
fd6b2539a49ec8943079b4b4404c7ce7733bad61cfb979e8bcb9452938/diff:/var/lib/docker/overlay2/11bcd32f602b60a09d522c9c1b2adb997d93e760836c2b167cb3fe7013a17bde/diff:/var/lib/docker/overlay2/f1b8d3bf324890c8146bc377bf84f5fc2cd5dadc7a40e860908ea577d9bc62a2/diff:/var/lib/docker/overlay2/125d3d54cecb15956c70f4b83b04916a43a61bfb3369cd68d526df3a050c99ca/diff:/var/lib/docker/overlay2/bd6a25a35bf9557f5504a8f1699aec23b1f99b3cee759b071127a6b825127792/diff:/var/lib/docker/overlay2/841869eeaa2b3f6d80532c86c5958d1b569ebfe49bdc023f4938240eb32c460c/diff:/var/lib/docker/overlay2/53af33cfef2c951bbe139854de86a45a9b4522a730dcfcdba0e8aef5bba013d5/diff:/var/lib/docker/overlay2/788fb6bbe7fafd3e8c91620491e6c7294b1703e1fed64ec421805dca54268fbd/diff:/var/lib/docker/overlay2/63e29f39a531abd1f576616c3b182de1e746d7ba7da7147889ca71cb4969d798/diff:/var/lib/docker/overlay2/4a1575a4c462e14d21379c59f45a7653ca5963d5e2abfcb57e4e8326334ba636/diff:/var/lib/docker/overlay2/60ccd8a661b011293ac8b3c7349020d806bdf567d8bf4a4a980d2f434751dc28/diff:/var/lib/d
ocker/overlay2/5069964ca352097281a3aa1ae9798f119a5d0111883fadc653a63fab7479b84b/diff:/var/lib/docker/overlay2/1c17cf62515bd84f906731d7499e7e64c13fe757147b5302664e67ff33c019fd/diff:/var/lib/docker/overlay2/3d557835fd44d545bb0f0ff99056b78d7513aea12756fa365525be14d3f2710d/diff:/var/lib/docker/overlay2/3cbd8518b9522d8f4c7283cdaacf22c2ea514344053eb281abb9fbe9a30db988/diff:/var/lib/docker/overlay2/1c30ee4608a466008e442ac0daadd96d7c28261232764faf586ec912a1e4273f/diff:/var/lib/docker/overlay2/8da5b0b338638e6ec0c3ab8f12f504deaa5f30c44ce2f997fe9e3d93cf0a6578/diff:/var/lib/docker/overlay2/baa59ca8fcc21ec599b532c7913fb98b3584ae92f1fa1b3b5792b09a8b04b628/diff:/var/lib/docker/overlay2/d937051f9202d860e0d2c2f868663ffca6be1e0f0ea37a9d7230ac4abc2dc146/diff:/var/lib/docker/overlay2/152eb814da73947d0bd62e31bd9e197595cfe58c6f4f6d36d08b1b757f52dcdd/diff:/var/lib/docker/overlay2/f63c7d09120adbd799677147b896ff920385b98a817f02911c62cddb272c9677/diff:/var/lib/docker/overlay2/eadfcf09672c7ad8e692de8cfc9b4a20d79bcaf3827f26e41e4722502d0
e229b/diff:/var/lib/docker/overlay2/e05b2091781d3df19e453ccfac2a5f2beda8a9669cd2da8d227295e03e2bfd18/diff:/var/lib/docker/overlay2/43fedb373e70df75bc03358b636d5b1f4d7f3fcbb5e0f2982c20c11d5cb37b0a/diff:/var/lib/docker/overlay2/77d04d2e2be341bd44462a10a8140f6ff088360d20f8928c33b49c8baed4db09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/628a3552cb11112316232965c954561df9dc7b13451017267e49c5e748657eab/merged",
	                "UpperDir": "/var/lib/docker/overlay2/628a3552cb11112316232965c954561df9dc7b13451017267e49c5e748657eab/diff",
	                "WorkDir": "/var/lib/docker/overlay2/628a3552cb11112316232965c954561df9dc7b13451017267e49c5e748657eab/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-935044",
	                "Source": "/var/lib/docker/volumes/running-upgrade-935044/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-935044",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-935044",
	                "name.minikube.sigs.k8s.io": "running-upgrade-935044",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7b67d3561f20259d828e3bcbc5ff43f4bdae898443817a6b7da1beaa408b56d8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33608"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33607"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33606"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33605"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7b67d3561f20",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-935044": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.133"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ed4dc6524730",
	                        "running-upgrade-935044"
	                    ],
	                    "NetworkID": "0fa1d93d2babe148bf0af6581aa7fe12d8d1f841fc3be98ee5028e13cdf9dcd2",
	                    "EndpointID": "ec628e74284d5bcd54db3c00c59e772c08416786efa8806adb137f5cf35e9569",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.133",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:85",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
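For reference, the useful fields in the inspect dump above are the published ports and the container IP; they can be read back directly with docker's Go-template syntax, the same mechanism the harness uses elsewhere in this report. A minimal sketch (container name and expected values are taken from the output above; the exact commands are illustrative and not part of the test run):

	docker container inspect running-upgrade-935044 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'   # 33605
	docker container inspect running-upgrade-935044 --format '{{(index .NetworkSettings.Networks "running-upgrade-935044").IPAddress}}'   # 192.168.70.133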
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-935044 -n running-upgrade-935044
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-935044 -n running-upgrade-935044: exit status 4 (514.672808ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 20:38:53.960082  788342 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-935044" does not appear in /home/jenkins/minikube-integration/17116-652515/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-935044" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-935044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-935044
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-935044: (2.92561159s)
--- FAIL: TestRunningBinaryUpgrade (74.76s)
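The failure mode above is a kubeconfig mismatch rather than a dead host: the container reports Running, but the "running-upgrade-935044" context is missing from the kubeconfig the new binary reads, so the endpoint check errors out and status exits 4. A minimal way to confirm and repair that state, assuming the profile had not yet been deleted (standard kubectl/minikube commands, shown for illustration only):

	kubectl config get-contexts                                         # running-upgrade-935044 is absent
	out/minikube-linux-arm64 update-context -p running-upgrade-935044   # rewrites the context, as the WARNING in the status output suggests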

                                                
                                    
x
+
TestMissingContainerUpgrade (139.26s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.17.0.3898697286.exe start -p missing-upgrade-424992 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:321: (dbg) Done: /tmp/minikube-v1.17.0.3898697286.exe start -p missing-upgrade-424992 --memory=2200 --driver=docker  --container-runtime=crio: (1m37.418322805s)
version_upgrade_test.go:330: (dbg) Run:  docker stop missing-upgrade-424992
version_upgrade_test.go:330: (dbg) Done: docker stop missing-upgrade-424992: (2.764719598s)
version_upgrade_test.go:335: (dbg) Run:  docker rm missing-upgrade-424992
version_upgrade_test.go:341: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-424992 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0906 20:35:28.132279  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
version_upgrade_test.go:341: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-424992 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (35.40551461s)
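The recreate path traced below hinges on a single probe: the new binary inspects the container recorded in the profile, and when that inspect fails it treats the machine as missing and rebuilds the network, volume and container from scratch. The probe is the same command that repeats throughout the stderr log; run by hand against the deleted container it looks roughly like this (output abbreviated):

	docker container inspect missing-upgrade-424992 --format={{.State.Status}}
	# Error response from daemon: No such container: missing-upgrade-424992   (exit status 1)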

                                                
                                                
-- stdout --
	* [missing-upgrade-424992] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-424992 in cluster missing-upgrade-424992
	* Pulling base image ...
	* docker "missing-upgrade-424992" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 20:35:14.532399  773005 out.go:296] Setting OutFile to fd 1 ...
	I0906 20:35:14.532649  773005 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:35:14.532661  773005 out.go:309] Setting ErrFile to fd 2...
	I0906 20:35:14.532667  773005 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:35:14.533026  773005 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17116-652515/.minikube/bin
	I0906 20:35:14.533597  773005 out.go:303] Setting JSON to false
	I0906 20:35:14.535043  773005 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11669,"bootTime":1694020846,"procs":281,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0906 20:35:14.535151  773005 start.go:138] virtualization:  
	I0906 20:35:14.540566  773005 out.go:177] * [missing-upgrade-424992] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0906 20:35:14.542709  773005 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 20:35:14.542784  773005 notify.go:220] Checking for updates...
	I0906 20:35:14.546741  773005 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 20:35:14.548479  773005 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig
	I0906 20:35:14.550372  773005 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube
	I0906 20:35:14.552332  773005 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0906 20:35:14.554185  773005 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 20:35:14.556711  773005 config.go:182] Loaded profile config "missing-upgrade-424992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0906 20:35:14.559055  773005 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0906 20:35:14.560930  773005 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 20:35:14.595708  773005 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0906 20:35:14.595808  773005 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 20:35:14.704587  773005 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:53 SystemTime:2023-09-06 20:35:14.691911834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0906 20:35:14.704694  773005 docker.go:294] overlay module found
	I0906 20:35:14.707872  773005 out.go:177] * Using the docker driver based on existing profile
	I0906 20:35:14.709747  773005 start.go:298] selected driver: docker
	I0906 20:35:14.709765  773005 start.go:902] validating driver "docker" against &{Name:missing-upgrade-424992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-424992 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.33 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0906 20:35:14.709875  773005 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 20:35:14.710500  773005 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 20:35:14.844205  773005 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:53 SystemTime:2023-09-06 20:35:14.831326708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0906 20:35:14.844515  773005 cni.go:84] Creating CNI manager for ""
	I0906 20:35:14.844524  773005 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0906 20:35:14.844534  773005 start_flags.go:321] config:
	{Name:missing-upgrade-424992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-424992 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.33 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0906 20:35:14.847809  773005 out.go:177] * Starting control plane node missing-upgrade-424992 in cluster missing-upgrade-424992
	I0906 20:35:14.849879  773005 cache.go:122] Beginning downloading kic base image for docker with crio
	I0906 20:35:14.851805  773005 out.go:177] * Pulling base image ...
	I0906 20:35:14.853644  773005 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0906 20:35:14.853894  773005 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0906 20:35:14.900608  773005 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I0906 20:35:14.900822  773005 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I0906 20:35:14.901380  773005 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W0906 20:35:14.930661  773005 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0906 20:35:14.930815  773005 profile.go:148] Saving config to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/missing-upgrade-424992/config.json ...
	I0906 20:35:14.931531  773005 cache.go:107] acquiring lock: {Name:mk761ea5917e65ea5320237ae9d3fd919647d74d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:35:14.931647  773005 cache.go:115] /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0906 20:35:14.931656  773005 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 142.728µs
	I0906 20:35:14.931681  773005 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0906 20:35:14.931690  773005 cache.go:107] acquiring lock: {Name:mk22f096c6a91c8e67a172b4be8ed0577944fdba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:35:14.931777  773005 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0906 20:35:14.932039  773005 cache.go:107] acquiring lock: {Name:mk6a4b577aeafaa6ec13d04d8bb7a342c256843b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:35:14.932197  773005 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I0906 20:35:14.932447  773005 cache.go:107] acquiring lock: {Name:mk1a4e838c2ad274a72380629743f1b35f47dd39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:35:14.932542  773005 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I0906 20:35:14.932764  773005 cache.go:107] acquiring lock: {Name:mkc27320f8e3da16932e91e3f74bf5d5b33dc664 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:35:14.932844  773005 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0906 20:35:14.933070  773005 cache.go:107] acquiring lock: {Name:mk53179198066eaf3115f5ed6bbe3ab3db1522c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:35:14.933152  773005 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I0906 20:35:14.935031  773005 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I0906 20:35:14.935399  773005 cache.go:107] acquiring lock: {Name:mk627e07c0eeaa37b5facf9ad8431a66a5f5c500 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:35:14.935528  773005 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0906 20:35:14.935789  773005 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0906 20:35:14.936475  773005 cache.go:107] acquiring lock: {Name:mk9a640a08153bc795cd4dd4cfaabc34e6d59789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:35:14.936570  773005 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0906 20:35:14.937575  773005 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I0906 20:35:14.939136  773005 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I0906 20:35:14.939613  773005 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0906 20:35:14.940207  773005 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0906 20:35:14.941506  773005 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0906 20:35:15.354581  773005 cache.go:162] opening:  /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0906 20:35:15.357836  773005 cache.go:162] opening:  /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	I0906 20:35:15.364349  773005 cache.go:162] opening:  /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	W0906 20:35:15.388726  773005 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I0906 20:35:15.398492  773005 cache.go:162] opening:  /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	W0906 20:35:15.392009  773005 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I0906 20:35:15.403138  773005 cache.go:162] opening:  /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	W0906 20:35:15.405559  773005 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I0906 20:35:15.405645  773005 cache.go:162] opening:  /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	I0906 20:35:15.457842  773005 cache.go:162] opening:  /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	I0906 20:35:15.547025  773005 cache.go:157] /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0906 20:35:15.547097  773005 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 615.406588ms
	I0906 20:35:15.547124  773005 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  17.69 KiB / 287.99 MiB [>] 0.01% ? p/s ?    > gcr.io/k8s-minikube/kicbase...:  897.34 KiB / 287.99 MiB [] 0.30% ? p/s ?
	I0906 20:35:15.933420  773005 cache.go:157] /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0906 20:35:15.933502  773005 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 997.030561ms
	I0906 20:35:15.933532  773005 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0906 20:35:16.029263  773005 cache.go:157] /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0906 20:35:16.029335  773005 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 1.0962764s
	I0906 20:35:16.029362  773005 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  8.50 MiB / 287.99 MiB [>_] 2.95% ? p/s ?    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 42.34 MiB 
	I0906 20:35:16.526655  773005 cache.go:157] /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0906 20:35:16.526736  773005 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.59429315s
	I0906 20:35:16.526765  773005 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 42.34 MiB 
	I0906 20:35:16.791411  773005 cache.go:157] /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0906 20:35:16.791440  773005 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 1.858688153s
	I0906 20:35:16.791454  773005 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  91.29 MiB / 287.99 MiB  31.70% 41.47 MiB
	I0906 20:35:18.126381  773005 cache.go:157] /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0906 20:35:18.130084  773005 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 3.198052362s
	I0906 20:35:18.130132  773005 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  162.01 MiB / 287.99 MiB  56.26% 44.21 Mi
	I0906 20:35:18.875315  773005 cache.go:157] /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0906 20:35:18.875347  773005 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 3.939953711s
	I0906 20:35:18.875382  773005 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0906 20:35:18.875393  773005 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 38.35 M
	I0906 20:35:23.143256  773005 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I0906 20:35:23.143281  773005 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I0906 20:35:23.322682  773005 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I0906 20:35:23.322720  773005 cache.go:195] Successfully downloaded all kic artifacts
	I0906 20:35:23.322760  773005 start.go:365] acquiring machines lock for missing-upgrade-424992: {Name:mkda0e517d36359d5e731e4d8d893d969652af44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:35:23.322832  773005 start.go:369] acquired machines lock for "missing-upgrade-424992" in 46.908µs
	I0906 20:35:23.322856  773005 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:35:23.322868  773005 fix.go:54] fixHost starting: 
	I0906 20:35:23.323164  773005 cli_runner.go:164] Run: docker container inspect missing-upgrade-424992 --format={{.State.Status}}
	W0906 20:35:23.341616  773005 cli_runner.go:211] docker container inspect missing-upgrade-424992 --format={{.State.Status}} returned with exit code 1
	I0906 20:35:23.341679  773005 fix.go:102] recreateIfNeeded on missing-upgrade-424992: state= err=unknown state "missing-upgrade-424992": docker container inspect missing-upgrade-424992 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424992
	I0906 20:35:23.341698  773005 fix.go:107] machineExists: false. err=machine does not exist
	I0906 20:35:23.344461  773005 out.go:177] * docker "missing-upgrade-424992" container is missing, will recreate.
	I0906 20:35:23.348844  773005 delete.go:124] DEMOLISHING missing-upgrade-424992 ...
	I0906 20:35:23.348948  773005 cli_runner.go:164] Run: docker container inspect missing-upgrade-424992 --format={{.State.Status}}
	W0906 20:35:23.366793  773005 cli_runner.go:211] docker container inspect missing-upgrade-424992 --format={{.State.Status}} returned with exit code 1
	W0906 20:35:23.366869  773005 stop.go:75] unable to get state: unknown state "missing-upgrade-424992": docker container inspect missing-upgrade-424992 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424992
	I0906 20:35:23.366888  773005 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-424992": docker container inspect missing-upgrade-424992 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424992
	I0906 20:35:23.367339  773005 cli_runner.go:164] Run: docker container inspect missing-upgrade-424992 --format={{.State.Status}}
	W0906 20:35:23.397677  773005 cli_runner.go:211] docker container inspect missing-upgrade-424992 --format={{.State.Status}} returned with exit code 1
	I0906 20:35:23.397745  773005 delete.go:82] Unable to get host status for missing-upgrade-424992, assuming it has already been deleted: state: unknown state "missing-upgrade-424992": docker container inspect missing-upgrade-424992 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424992
	I0906 20:35:23.397804  773005 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-424992
	W0906 20:35:23.420727  773005 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-424992 returned with exit code 1
	I0906 20:35:23.420760  773005 kic.go:367] could not find the container missing-upgrade-424992 to remove it. will try anyways
	I0906 20:35:23.420813  773005 cli_runner.go:164] Run: docker container inspect missing-upgrade-424992 --format={{.State.Status}}
	W0906 20:35:23.451895  773005 cli_runner.go:211] docker container inspect missing-upgrade-424992 --format={{.State.Status}} returned with exit code 1
	W0906 20:35:23.451973  773005 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-424992": docker container inspect missing-upgrade-424992 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424992
	I0906 20:35:23.452806  773005 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-424992 /bin/bash -c "sudo init 0"
	W0906 20:35:23.474402  773005 cli_runner.go:211] docker exec --privileged -t missing-upgrade-424992 /bin/bash -c "sudo init 0" returned with exit code 1
	I0906 20:35:23.474435  773005 oci.go:647] error shutdown missing-upgrade-424992: docker exec --privileged -t missing-upgrade-424992 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424992
	I0906 20:35:24.474611  773005 cli_runner.go:164] Run: docker container inspect missing-upgrade-424992 --format={{.State.Status}}
	W0906 20:35:24.492021  773005 cli_runner.go:211] docker container inspect missing-upgrade-424992 --format={{.State.Status}} returned with exit code 1
	I0906 20:35:24.492096  773005 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-424992": docker container inspect missing-upgrade-424992 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424992
	I0906 20:35:24.492111  773005 oci.go:661] temporary error: container missing-upgrade-424992 status is  but expect it to be exited
	I0906 20:35:24.492142  773005 retry.go:31] will retry after 388.700068ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-424992": docker container inspect missing-upgrade-424992 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424992
	I0906 20:35:24.881754  773005 cli_runner.go:164] Run: docker container inspect missing-upgrade-424992 --format={{.State.Status}}
	W0906 20:35:24.901711  773005 cli_runner.go:211] docker container inspect missing-upgrade-424992 --format={{.State.Status}} returned with exit code 1
	I0906 20:35:24.901776  773005 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-424992": docker container inspect missing-upgrade-424992 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424992
	I0906 20:35:24.901794  773005 oci.go:661] temporary error: container missing-upgrade-424992 status is  but expect it to be exited
	I0906 20:35:24.901818  773005 retry.go:31] will retry after 1.034392767s: couldn't verify container is exited. %v: unknown state "missing-upgrade-424992": docker container inspect missing-upgrade-424992 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424992
	I0906 20:35:25.936422  773005 cli_runner.go:164] Run: docker container inspect missing-upgrade-424992 --format={{.State.Status}}
	W0906 20:35:25.963298  773005 cli_runner.go:211] docker container inspect missing-upgrade-424992 --format={{.State.Status}} returned with exit code 1
	I0906 20:35:25.963353  773005 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-424992": docker container inspect missing-upgrade-424992 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424992
	I0906 20:35:25.963362  773005 oci.go:661] temporary error: container missing-upgrade-424992 status is  but expect it to be exited
	I0906 20:35:25.963384  773005 retry.go:31] will retry after 1.500664529s: couldn't verify container is exited. %v: unknown state "missing-upgrade-424992": docker container inspect missing-upgrade-424992 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424992
	I0906 20:35:27.464779  773005 cli_runner.go:164] Run: docker container inspect missing-upgrade-424992 --format={{.State.Status}}
	W0906 20:35:27.515446  773005 cli_runner.go:211] docker container inspect missing-upgrade-424992 --format={{.State.Status}} returned with exit code 1
	I0906 20:35:27.515510  773005 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-424992": docker container inspect missing-upgrade-424992 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424992
	I0906 20:35:27.515525  773005 oci.go:661] temporary error: container missing-upgrade-424992 status is  but expect it to be exited
	I0906 20:35:27.515549  773005 retry.go:31] will retry after 1.283261881s: couldn't verify container is exited. %v: unknown state "missing-upgrade-424992": docker container inspect missing-upgrade-424992 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424992
	I0906 20:35:28.798998  773005 cli_runner.go:164] Run: docker container inspect missing-upgrade-424992 --format={{.State.Status}}
	W0906 20:35:28.817186  773005 cli_runner.go:211] docker container inspect missing-upgrade-424992 --format={{.State.Status}} returned with exit code 1
	I0906 20:35:28.817243  773005 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-424992": docker container inspect missing-upgrade-424992 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424992
	I0906 20:35:28.817254  773005 oci.go:661] temporary error: container missing-upgrade-424992 status is  but expect it to be exited
	I0906 20:35:28.817277  773005 retry.go:31] will retry after 2.016132889s: couldn't verify container is exited. %v: unknown state "missing-upgrade-424992": docker container inspect missing-upgrade-424992 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424992
	I0906 20:35:30.833633  773005 cli_runner.go:164] Run: docker container inspect missing-upgrade-424992 --format={{.State.Status}}
	W0906 20:35:30.852042  773005 cli_runner.go:211] docker container inspect missing-upgrade-424992 --format={{.State.Status}} returned with exit code 1
	I0906 20:35:30.852101  773005 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-424992": docker container inspect missing-upgrade-424992 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424992
	I0906 20:35:30.852113  773005 oci.go:661] temporary error: container missing-upgrade-424992 status is  but expect it to be exited
	I0906 20:35:30.852137  773005 retry.go:31] will retry after 3.418852901s: couldn't verify container is exited. %v: unknown state "missing-upgrade-424992": docker container inspect missing-upgrade-424992 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424992
	I0906 20:35:34.271167  773005 cli_runner.go:164] Run: docker container inspect missing-upgrade-424992 --format={{.State.Status}}
	W0906 20:35:34.290607  773005 cli_runner.go:211] docker container inspect missing-upgrade-424992 --format={{.State.Status}} returned with exit code 1
	I0906 20:35:34.290670  773005 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-424992": docker container inspect missing-upgrade-424992 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424992
	I0906 20:35:34.290683  773005 oci.go:661] temporary error: container missing-upgrade-424992 status is  but expect it to be exited
	I0906 20:35:34.290708  773005 retry.go:31] will retry after 5.68023512s: couldn't verify container is exited. %v: unknown state "missing-upgrade-424992": docker container inspect missing-upgrade-424992 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424992
	I0906 20:35:39.971962  773005 cli_runner.go:164] Run: docker container inspect missing-upgrade-424992 --format={{.State.Status}}
	W0906 20:35:39.993078  773005 cli_runner.go:211] docker container inspect missing-upgrade-424992 --format={{.State.Status}} returned with exit code 1
	I0906 20:35:39.993139  773005 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-424992": docker container inspect missing-upgrade-424992 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424992
	I0906 20:35:39.993159  773005 oci.go:661] temporary error: container missing-upgrade-424992 status is  but expect it to be exited
	I0906 20:35:39.993193  773005 oci.go:88] couldn't shut down missing-upgrade-424992 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-424992": docker container inspect missing-upgrade-424992 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-424992
	 
	I0906 20:35:39.993256  773005 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-424992
	I0906 20:35:40.036245  773005 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-424992
	W0906 20:35:40.054021  773005 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-424992 returned with exit code 1
	I0906 20:35:40.054222  773005 cli_runner.go:164] Run: docker network inspect missing-upgrade-424992 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0906 20:35:40.072186  773005 cli_runner.go:164] Run: docker network rm missing-upgrade-424992
	I0906 20:35:40.173256  773005 fix.go:114] Sleeping 1 second for extra luck!
	I0906 20:35:41.173404  773005 start.go:125] createHost starting for "" (driver="docker")
	I0906 20:35:41.175291  773005 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0906 20:35:41.175461  773005 start.go:159] libmachine.API.Create for "missing-upgrade-424992" (driver="docker")
	I0906 20:35:41.175488  773005 client.go:168] LocalClient.Create starting
	I0906 20:35:41.175563  773005 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem
	I0906 20:35:41.175604  773005 main.go:141] libmachine: Decoding PEM data...
	I0906 20:35:41.175623  773005 main.go:141] libmachine: Parsing certificate...
	I0906 20:35:41.175683  773005 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem
	I0906 20:35:41.175706  773005 main.go:141] libmachine: Decoding PEM data...
	I0906 20:35:41.175720  773005 main.go:141] libmachine: Parsing certificate...
	I0906 20:35:41.175975  773005 cli_runner.go:164] Run: docker network inspect missing-upgrade-424992 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0906 20:35:41.192867  773005 cli_runner.go:211] docker network inspect missing-upgrade-424992 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0906 20:35:41.192944  773005 network_create.go:281] running [docker network inspect missing-upgrade-424992] to gather additional debugging logs...
	I0906 20:35:41.192963  773005 cli_runner.go:164] Run: docker network inspect missing-upgrade-424992
	W0906 20:35:41.212219  773005 cli_runner.go:211] docker network inspect missing-upgrade-424992 returned with exit code 1
	I0906 20:35:41.212248  773005 network_create.go:284] error running [docker network inspect missing-upgrade-424992]: docker network inspect missing-upgrade-424992: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-424992 not found
	I0906 20:35:41.212261  773005 network_create.go:286] output of [docker network inspect missing-upgrade-424992]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-424992 not found
	
	** /stderr **
	I0906 20:35:41.212325  773005 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0906 20:35:41.231196  773005 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f4f092eb4771 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:82:b7:f8:ad} reservation:<nil>}
	I0906 20:35:41.231560  773005 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-35fe0716e990 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:d1:9e:81:86} reservation:<nil>}
	I0906 20:35:41.232075  773005 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5c128efc1ad2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:b2:d3:9d:b4} reservation:<nil>}
	I0906 20:35:41.232558  773005 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a6f6b0}
	I0906 20:35:41.232577  773005 network_create.go:123] attempt to create docker network missing-upgrade-424992 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0906 20:35:41.232634  773005 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-424992 missing-upgrade-424992
	I0906 20:35:41.306352  773005 network_create.go:107] docker network missing-upgrade-424992 192.168.76.0/24 created
	I0906 20:35:41.306383  773005 kic.go:117] calculated static IP "192.168.76.2" for the "missing-upgrade-424992" container
	I0906 20:35:41.306456  773005 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0906 20:35:41.323170  773005 cli_runner.go:164] Run: docker volume create missing-upgrade-424992 --label name.minikube.sigs.k8s.io=missing-upgrade-424992 --label created_by.minikube.sigs.k8s.io=true
	I0906 20:35:41.342017  773005 oci.go:103] Successfully created a docker volume missing-upgrade-424992
	I0906 20:35:41.342140  773005 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-424992-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-424992 --entrypoint /usr/bin/test -v missing-upgrade-424992:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I0906 20:35:43.079552  773005 cli_runner.go:217] Completed: docker run --rm --name missing-upgrade-424992-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-424992 --entrypoint /usr/bin/test -v missing-upgrade-424992:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib: (1.737366874s)
	I0906 20:35:43.079582  773005 oci.go:107] Successfully prepared a docker volume missing-upgrade-424992
	I0906 20:35:43.079610  773005 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W0906 20:35:43.079763  773005 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0906 20:35:43.079874  773005 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0906 20:35:43.149184  773005 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-424992 --name missing-upgrade-424992 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-424992 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-424992 --network missing-upgrade-424992 --ip 192.168.76.2 --volume missing-upgrade-424992:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I0906 20:35:43.524160  773005 cli_runner.go:164] Run: docker container inspect missing-upgrade-424992 --format={{.State.Running}}
	I0906 20:35:43.548809  773005 cli_runner.go:164] Run: docker container inspect missing-upgrade-424992 --format={{.State.Status}}
	I0906 20:35:43.572563  773005 cli_runner.go:164] Run: docker exec missing-upgrade-424992 stat /var/lib/dpkg/alternatives/iptables
	I0906 20:35:43.652635  773005 oci.go:144] the created container "missing-upgrade-424992" has a running status.
	I0906 20:35:43.652660  773005 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17116-652515/.minikube/machines/missing-upgrade-424992/id_rsa...
	I0906 20:35:44.102120  773005 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17116-652515/.minikube/machines/missing-upgrade-424992/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0906 20:35:44.134445  773005 cli_runner.go:164] Run: docker container inspect missing-upgrade-424992 --format={{.State.Status}}
	I0906 20:35:44.155741  773005 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0906 20:35:44.155762  773005 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-424992 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0906 20:35:44.262581  773005 cli_runner.go:164] Run: docker container inspect missing-upgrade-424992 --format={{.State.Status}}
	I0906 20:35:44.292897  773005 machine.go:88] provisioning docker machine ...
	I0906 20:35:44.293122  773005 ubuntu.go:169] provisioning hostname "missing-upgrade-424992"
	I0906 20:35:44.293266  773005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424992
	I0906 20:35:44.316112  773005 main.go:141] libmachine: Using SSH client type: native
	I0906 20:35:44.316578  773005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33591 <nil> <nil>}
	I0906 20:35:44.316598  773005 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-424992 && echo "missing-upgrade-424992" | sudo tee /etc/hostname
	I0906 20:35:44.482038  773005 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-424992
	
	I0906 20:35:44.482144  773005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424992
	I0906 20:35:44.510753  773005 main.go:141] libmachine: Using SSH client type: native
	I0906 20:35:44.511204  773005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33591 <nil> <nil>}
	I0906 20:35:44.511221  773005 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-424992' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-424992/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-424992' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:35:44.659296  773005 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:35:44.659323  773005 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17116-652515/.minikube CaCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17116-652515/.minikube}
	I0906 20:35:44.659351  773005 ubuntu.go:177] setting up certificates
	I0906 20:35:44.659360  773005 provision.go:83] configureAuth start
	I0906 20:35:44.659422  773005 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-424992
	I0906 20:35:44.683721  773005 provision.go:138] copyHostCerts
	I0906 20:35:44.683778  773005 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem, removing ...
	I0906 20:35:44.683787  773005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem
	I0906 20:35:44.683864  773005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem (1082 bytes)
	I0906 20:35:44.683947  773005 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem, removing ...
	I0906 20:35:44.683952  773005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem
	I0906 20:35:44.683979  773005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem (1123 bytes)
	I0906 20:35:44.684027  773005 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem, removing ...
	I0906 20:35:44.684031  773005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem
	I0906 20:35:44.684056  773005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem (1679 bytes)
	I0906 20:35:44.684100  773005 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-424992 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-424992]
	I0906 20:35:45.097227  773005 provision.go:172] copyRemoteCerts
	I0906 20:35:45.097388  773005 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:35:45.097508  773005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424992
	I0906 20:35:45.131896  773005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33591 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/missing-upgrade-424992/id_rsa Username:docker}
	I0906 20:35:45.276028  773005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0906 20:35:45.311744  773005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:35:45.358439  773005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 20:35:45.393225  773005 provision.go:86] duration metric: configureAuth took 733.845205ms
	I0906 20:35:45.393252  773005 ubuntu.go:193] setting minikube options for container-runtime
	I0906 20:35:45.393484  773005 config.go:182] Loaded profile config "missing-upgrade-424992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0906 20:35:45.393612  773005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424992
	I0906 20:35:45.419118  773005 main.go:141] libmachine: Using SSH client type: native
	I0906 20:35:45.419625  773005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33591 <nil> <nil>}
	I0906 20:35:45.419652  773005 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:35:45.861562  773005 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:35:45.861587  773005 machine.go:91] provisioned docker machine in 1.568617601s
	I0906 20:35:45.861597  773005 client.go:171] LocalClient.Create took 4.686101825s
	I0906 20:35:45.861607  773005 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-424992" took 4.686147856s
	I0906 20:35:45.861614  773005 start.go:300] post-start starting for "missing-upgrade-424992" (driver="docker")
	I0906 20:35:45.861624  773005 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:35:45.861694  773005 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:35:45.861743  773005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424992
	I0906 20:35:45.885206  773005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33591 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/missing-upgrade-424992/id_rsa Username:docker}
	I0906 20:35:45.983190  773005 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:35:45.987176  773005 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 20:35:45.987201  773005 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 20:35:45.987211  773005 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 20:35:45.987218  773005 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0906 20:35:45.987227  773005 filesync.go:126] Scanning /home/jenkins/minikube-integration/17116-652515/.minikube/addons for local assets ...
	I0906 20:35:45.987291  773005 filesync.go:126] Scanning /home/jenkins/minikube-integration/17116-652515/.minikube/files for local assets ...
	I0906 20:35:45.987392  773005 filesync.go:149] local asset: /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem -> 6579002.pem in /etc/ssl/certs
	I0906 20:35:45.987504  773005 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:35:45.996522  773005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem --> /etc/ssl/certs/6579002.pem (1708 bytes)
	I0906 20:35:46.032939  773005 start.go:303] post-start completed in 171.310008ms
	I0906 20:35:46.033391  773005 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-424992
	I0906 20:35:46.052093  773005 profile.go:148] Saving config to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/missing-upgrade-424992/config.json ...
	I0906 20:35:46.052483  773005 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 20:35:46.052536  773005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424992
	I0906 20:35:46.074736  773005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33591 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/missing-upgrade-424992/id_rsa Username:docker}
	I0906 20:35:46.173232  773005 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 20:35:46.179107  773005 start.go:128] duration metric: createHost completed in 5.005667752s
	I0906 20:35:46.179201  773005 cli_runner.go:164] Run: docker container inspect missing-upgrade-424992 --format={{.State.Status}}
	W0906 20:35:46.201585  773005 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 20:35:46.201609  773005 machine.go:88] provisioning docker machine ...
	I0906 20:35:46.201628  773005 ubuntu.go:169] provisioning hostname "missing-upgrade-424992"
	I0906 20:35:46.201692  773005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424992
	I0906 20:35:46.220381  773005 main.go:141] libmachine: Using SSH client type: native
	I0906 20:35:46.220823  773005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33591 <nil> <nil>}
	I0906 20:35:46.220840  773005 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-424992 && echo "missing-upgrade-424992" | sudo tee /etc/hostname
	I0906 20:35:46.374333  773005 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-424992
	
	I0906 20:35:46.374454  773005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424992
	I0906 20:35:46.398545  773005 main.go:141] libmachine: Using SSH client type: native
	I0906 20:35:46.398976  773005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33591 <nil> <nil>}
	I0906 20:35:46.399001  773005 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-424992' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-424992/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-424992' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:35:46.543511  773005 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:35:46.543534  773005 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17116-652515/.minikube CaCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17116-652515/.minikube}
	I0906 20:35:46.543551  773005 ubuntu.go:177] setting up certificates
	I0906 20:35:46.543561  773005 provision.go:83] configureAuth start
	I0906 20:35:46.543628  773005 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-424992
	I0906 20:35:46.563144  773005 provision.go:138] copyHostCerts
	I0906 20:35:46.563227  773005 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem, removing ...
	I0906 20:35:46.563242  773005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem
	I0906 20:35:46.563322  773005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem (1082 bytes)
	I0906 20:35:46.563412  773005 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem, removing ...
	I0906 20:35:46.563421  773005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem
	I0906 20:35:46.563449  773005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem (1123 bytes)
	I0906 20:35:46.563503  773005 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem, removing ...
	I0906 20:35:46.563511  773005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem
	I0906 20:35:46.563538  773005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem (1679 bytes)
	I0906 20:35:46.563581  773005 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-424992 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-424992]
	I0906 20:35:48.056479  773005 provision.go:172] copyRemoteCerts
	I0906 20:35:48.056559  773005 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:35:48.056604  773005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424992
	I0906 20:35:48.078198  773005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33591 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/missing-upgrade-424992/id_rsa Username:docker}
	I0906 20:35:48.179566  773005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 20:35:48.211012  773005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0906 20:35:48.240476  773005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:35:48.263947  773005 provision.go:86] duration metric: configureAuth took 1.720373331s
	I0906 20:35:48.264015  773005 ubuntu.go:193] setting minikube options for container-runtime
	I0906 20:35:48.264237  773005 config.go:182] Loaded profile config "missing-upgrade-424992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0906 20:35:48.264407  773005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424992
	I0906 20:35:48.291866  773005 main.go:141] libmachine: Using SSH client type: native
	I0906 20:35:48.292383  773005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33591 <nil> <nil>}
	I0906 20:35:48.292402  773005 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:35:48.620412  773005 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:35:48.620482  773005 machine.go:91] provisioned docker machine in 2.418854796s
	I0906 20:35:48.620517  773005 start.go:300] post-start starting for "missing-upgrade-424992" (driver="docker")
	I0906 20:35:48.620557  773005 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:35:48.620658  773005 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:35:48.620736  773005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424992
	I0906 20:35:48.645877  773005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33591 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/missing-upgrade-424992/id_rsa Username:docker}
	I0906 20:35:48.749751  773005 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:35:48.753899  773005 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 20:35:48.753967  773005 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 20:35:48.753985  773005 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 20:35:48.753993  773005 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0906 20:35:48.754002  773005 filesync.go:126] Scanning /home/jenkins/minikube-integration/17116-652515/.minikube/addons for local assets ...
	I0906 20:35:48.754090  773005 filesync.go:126] Scanning /home/jenkins/minikube-integration/17116-652515/.minikube/files for local assets ...
	I0906 20:35:48.754171  773005 filesync.go:149] local asset: /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem -> 6579002.pem in /etc/ssl/certs
	I0906 20:35:48.754282  773005 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:35:48.764330  773005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem --> /etc/ssl/certs/6579002.pem (1708 bytes)
	I0906 20:35:48.787543  773005 start.go:303] post-start completed in 166.983557ms
	I0906 20:35:48.787650  773005 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 20:35:48.787714  773005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424992
	I0906 20:35:48.828579  773005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33591 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/missing-upgrade-424992/id_rsa Username:docker}
	I0906 20:35:48.928451  773005 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 20:35:48.934247  773005 fix.go:56] fixHost completed within 25.611376475s
	I0906 20:35:48.934269  773005 start.go:83] releasing machines lock for "missing-upgrade-424992", held for 25.61142514s
	I0906 20:35:48.934339  773005 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-424992
	I0906 20:35:48.955352  773005 ssh_runner.go:195] Run: cat /version.json
	I0906 20:35:48.955407  773005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424992
	I0906 20:35:48.955688  773005 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:35:48.955752  773005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-424992
	I0906 20:35:48.975146  773005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33591 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/missing-upgrade-424992/id_rsa Username:docker}
	I0906 20:35:48.979858  773005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33591 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/missing-upgrade-424992/id_rsa Username:docker}
	W0906 20:35:49.070893  773005 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0906 20:35:49.070978  773005 ssh_runner.go:195] Run: systemctl --version
	I0906 20:35:49.203195  773005 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:35:49.307829  773005 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 20:35:49.314666  773005 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:35:49.343737  773005 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0906 20:35:49.343886  773005 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:35:49.378387  773005 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:35:49.378459  773005 start.go:466] detecting cgroup driver to use...
	I0906 20:35:49.378504  773005 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0906 20:35:49.378589  773005 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:35:49.407896  773005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:35:49.420414  773005 docker.go:196] disabling cri-docker service (if available) ...
	I0906 20:35:49.420525  773005 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:35:49.433015  773005 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:35:49.446677  773005 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0906 20:35:49.460791  773005 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0906 20:35:49.460859  773005 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:35:49.567208  773005 docker.go:212] disabling docker service ...
	I0906 20:35:49.567321  773005 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:35:49.582883  773005 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:35:49.597607  773005 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:35:49.699947  773005 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:35:49.805922  773005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:35:49.819861  773005 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:35:49.838710  773005 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0906 20:35:49.838776  773005 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:35:49.853010  773005 out.go:177] 
	W0906 20:35:49.855075  773005 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0906 20:35:49.855106  773005 out.go:239] * 
	* 
	W0906 20:35:49.856264  773005 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 20:35:49.858740  773005 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:343: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-424992 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:345: *** TestMissingContainerUpgrade FAILED at 2023-09-06 20:35:49.892135283 +0000 UTC m=+2352.027748064
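The exit status 90 above traces back to the pause_image step in the stderr trace: the sed targets /etc/crio/crio.conf.d/02-crio.conf, but that drop-in file does not exist in the old kicbase v0.0.17 rootfs, so sed exits with status 2 and the start aborts with RUNTIME_ENABLE. Below is a minimal shell sketch of a more defensive edit; the fallback path /etc/crio/crio.conf is an assumption for illustration, not minikube's actual code path.

	# Sketch only: update pause_image in whichever CRI-O config file is present.
	# Assumes older kicbase images ship /etc/crio/crio.conf rather than the 02-crio.conf drop-in.
	set -eu
	conf="/etc/crio/crio.conf.d/02-crio.conf"
	if [ ! -f "$conf" ]; then
	    conf="/etc/crio/crio.conf"   # fallback for older images (assumption)
	fi
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf"
	sudo systemctl restart crio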
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-424992
helpers_test.go:235: (dbg) docker inspect missing-upgrade-424992:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f6306ae41575579ca244d725b5d0a86a9ee126d9fe8727f25ff9038bf530edc7",
	        "Created": "2023-09-06T20:35:43.166153169Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 774121,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-06T20:35:43.515965765Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/f6306ae41575579ca244d725b5d0a86a9ee126d9fe8727f25ff9038bf530edc7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f6306ae41575579ca244d725b5d0a86a9ee126d9fe8727f25ff9038bf530edc7/hostname",
	        "HostsPath": "/var/lib/docker/containers/f6306ae41575579ca244d725b5d0a86a9ee126d9fe8727f25ff9038bf530edc7/hosts",
	        "LogPath": "/var/lib/docker/containers/f6306ae41575579ca244d725b5d0a86a9ee126d9fe8727f25ff9038bf530edc7/f6306ae41575579ca244d725b5d0a86a9ee126d9fe8727f25ff9038bf530edc7-json.log",
	        "Name": "/missing-upgrade-424992",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-424992:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-424992",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d09ff46a5046b7c79964fde453c6b25ed3250dfa07b2c3156d10d56ce11beedf-init/diff:/var/lib/docker/overlay2/3ca3844803c20261fffbd3abf87c36258201bdd8b720baafe53fb5f0e1cef2b2/diff:/var/lib/docker/overlay2/42f4fa8823ae920975ef4b3e77104e0ce5537ee0d647c4e560a7380c0dbef7ce/diff:/var/lib/docker/overlay2/4f4312fd1a6c349a6a0110f052579f981d52130f30c9a6b73eac5188cc2e6d39/diff:/var/lib/docker/overlay2/3e117e25284a6c23658700ed8040c9aab61a8c190c2ee6ad851e33caaee943dc/diff:/var/lib/docker/overlay2/06a9f3e13f8e054f47d37dbd717e9a6875582c5d40fe418be2a5f58c386bf224/diff:/var/lib/docker/overlay2/69a6bfd28c5dbbf4c3372a6021432ea658a5d47bc023bbf2fe7bd13dd5886351/diff:/var/lib/docker/overlay2/f9fe5a263fe11ece55dd4ee5567435e2ba4238ebd7115f6afd9f199dcacc06c3/diff:/var/lib/docker/overlay2/236b696cc98c9940476470340881a416040b87d49f549c85c4a10ba45f761b7f/diff:/var/lib/docker/overlay2/fbc634275957713c088d273f19a056241465381b555d462d547cb2331e7cd4e3/diff:/var/lib/docker/overlay2/df3266
fd6b2539a49ec8943079b4b4404c7ce7733bad61cfb979e8bcb9452938/diff:/var/lib/docker/overlay2/11bcd32f602b60a09d522c9c1b2adb997d93e760836c2b167cb3fe7013a17bde/diff:/var/lib/docker/overlay2/f1b8d3bf324890c8146bc377bf84f5fc2cd5dadc7a40e860908ea577d9bc62a2/diff:/var/lib/docker/overlay2/125d3d54cecb15956c70f4b83b04916a43a61bfb3369cd68d526df3a050c99ca/diff:/var/lib/docker/overlay2/bd6a25a35bf9557f5504a8f1699aec23b1f99b3cee759b071127a6b825127792/diff:/var/lib/docker/overlay2/841869eeaa2b3f6d80532c86c5958d1b569ebfe49bdc023f4938240eb32c460c/diff:/var/lib/docker/overlay2/53af33cfef2c951bbe139854de86a45a9b4522a730dcfcdba0e8aef5bba013d5/diff:/var/lib/docker/overlay2/788fb6bbe7fafd3e8c91620491e6c7294b1703e1fed64ec421805dca54268fbd/diff:/var/lib/docker/overlay2/63e29f39a531abd1f576616c3b182de1e746d7ba7da7147889ca71cb4969d798/diff:/var/lib/docker/overlay2/4a1575a4c462e14d21379c59f45a7653ca5963d5e2abfcb57e4e8326334ba636/diff:/var/lib/docker/overlay2/60ccd8a661b011293ac8b3c7349020d806bdf567d8bf4a4a980d2f434751dc28/diff:/var/lib/d
ocker/overlay2/5069964ca352097281a3aa1ae9798f119a5d0111883fadc653a63fab7479b84b/diff:/var/lib/docker/overlay2/1c17cf62515bd84f906731d7499e7e64c13fe757147b5302664e67ff33c019fd/diff:/var/lib/docker/overlay2/3d557835fd44d545bb0f0ff99056b78d7513aea12756fa365525be14d3f2710d/diff:/var/lib/docker/overlay2/3cbd8518b9522d8f4c7283cdaacf22c2ea514344053eb281abb9fbe9a30db988/diff:/var/lib/docker/overlay2/1c30ee4608a466008e442ac0daadd96d7c28261232764faf586ec912a1e4273f/diff:/var/lib/docker/overlay2/8da5b0b338638e6ec0c3ab8f12f504deaa5f30c44ce2f997fe9e3d93cf0a6578/diff:/var/lib/docker/overlay2/baa59ca8fcc21ec599b532c7913fb98b3584ae92f1fa1b3b5792b09a8b04b628/diff:/var/lib/docker/overlay2/d937051f9202d860e0d2c2f868663ffca6be1e0f0ea37a9d7230ac4abc2dc146/diff:/var/lib/docker/overlay2/152eb814da73947d0bd62e31bd9e197595cfe58c6f4f6d36d08b1b757f52dcdd/diff:/var/lib/docker/overlay2/f63c7d09120adbd799677147b896ff920385b98a817f02911c62cddb272c9677/diff:/var/lib/docker/overlay2/eadfcf09672c7ad8e692de8cfc9b4a20d79bcaf3827f26e41e4722502d0
e229b/diff:/var/lib/docker/overlay2/e05b2091781d3df19e453ccfac2a5f2beda8a9669cd2da8d227295e03e2bfd18/diff:/var/lib/docker/overlay2/43fedb373e70df75bc03358b636d5b1f4d7f3fcbb5e0f2982c20c11d5cb37b0a/diff:/var/lib/docker/overlay2/77d04d2e2be341bd44462a10a8140f6ff088360d20f8928c33b49c8baed4db09/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d09ff46a5046b7c79964fde453c6b25ed3250dfa07b2c3156d10d56ce11beedf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d09ff46a5046b7c79964fde453c6b25ed3250dfa07b2c3156d10d56ce11beedf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d09ff46a5046b7c79964fde453c6b25ed3250dfa07b2c3156d10d56ce11beedf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-424992",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-424992/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-424992",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-424992",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-424992",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ab17d89d8a06b528e958653223ee169a4b57dc6bb70a99dc004c34457594359f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33591"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33590"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33587"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33589"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33588"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ab17d89d8a06",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-424992": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f6306ae41575",
	                        "missing-upgrade-424992"
	                    ],
	                    "NetworkID": "6a08a325b035ccc40049922cf1e1fa7a43181722737e894929c606e3bc1041ba",
	                    "EndpointID": "03fe42325548789d1869f9e53e7b7dd0a5afbb92e0e6a3e57ee807036f65faa2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
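The NetworkSettings.Ports section of the inspect output above lists the 127.0.0.1 host ports that the provisioner queried earlier via a Go template (33591 for 22/tcp). A sketch of that lookup, reusing the template that appears in the log; the container name is the one from this run:

	# Read the host port published for the container's SSH port (22/tcp).
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  missing-upgrade-424992
	# prints 33591 for the inspect output shown above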
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-424992 -n missing-upgrade-424992
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-424992 -n missing-upgrade-424992: exit status 6 (387.072835ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 20:35:50.290178  775123 status.go:415] kubeconfig endpoint: got: 192.168.59.33:8443, want: 192.168.76.2:8443

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-424992" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
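The status exit 6 here is only a kubeconfig mismatch: kubectl still points at 192.168.59.33:8443 while the recreated container is at 192.168.76.2:8443, which is what the "stale minikube-vm" warning refers to. A sketch of the fix that warning suggests, assuming the profile were kept rather than deleted in the cleanup below:

	# Repoint kubectl at the cluster's current endpoint, then verify it.
	out/minikube-linux-arm64 update-context -p missing-upgrade-424992
	kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
	# should now show https://192.168.76.2:8443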
helpers_test.go:175: Cleaning up "missing-upgrade-424992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-424992
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-424992: (2.159775462s)
--- FAIL: TestMissingContainerUpgrade (139.26s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (73.38s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-056574 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-056574 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m6.250936884s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-056574] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node pause-056574 in cluster pause-056574
	* Pulling base image ...
	* Updating the running docker "pause-056574" container ...
	* Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-056574" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
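The second start above prints "Updating the running docker \"pause-056574\" container ..." rather than the expected "The running cluster does not require reconfiguration" line, which is why pause_test.go:100 fails. A rough shell equivalent of that check, paraphrased rather than the test's actual Go code:

	# Re-run the second start and look for the no-reconfiguration marker in its output.
	out/minikube-linux-arm64 start -p pause-056574 --alsologtostderr -v=1 \
	  --driver=docker --container-runtime=crio 2>&1 \
	  | grep -q "The running cluster does not require reconfiguration" \
	  || echo "second start reconfigured the cluster"

The full stderr trace from that second start follows.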
** stderr ** 
	I0906 20:33:34.313466  765316 out.go:296] Setting OutFile to fd 1 ...
	I0906 20:33:34.313689  765316 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:33:34.313701  765316 out.go:309] Setting ErrFile to fd 2...
	I0906 20:33:34.313707  765316 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:33:34.314132  765316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17116-652515/.minikube/bin
	I0906 20:33:34.314582  765316 out.go:303] Setting JSON to false
	I0906 20:33:34.315720  765316 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11569,"bootTime":1694020846,"procs":374,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0906 20:33:34.315790  765316 start.go:138] virtualization:  
	I0906 20:33:34.319340  765316 out.go:177] * [pause-056574] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0906 20:33:34.327361  765316 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 20:33:34.330841  765316 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 20:33:34.327572  765316 notify.go:220] Checking for updates...
	I0906 20:33:34.335983  765316 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig
	I0906 20:33:34.338698  765316 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube
	I0906 20:33:34.340598  765316 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0906 20:33:34.343246  765316 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 20:33:34.345753  765316 config.go:182] Loaded profile config "pause-056574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 20:33:34.346369  765316 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 20:33:34.375935  765316 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0906 20:33:34.376035  765316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 20:33:34.563800  765316 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-09-06 20:33:34.549453742 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0906 20:33:34.563907  765316 docker.go:294] overlay module found
	I0906 20:33:34.567373  765316 out.go:177] * Using the docker driver based on existing profile
	I0906 20:33:34.569301  765316 start.go:298] selected driver: docker
	I0906 20:33:34.569317  765316 start.go:902] validating driver "docker" against &{Name:pause-056574 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-056574 Namespace:default APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-c
reds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 20:33:34.569447  765316 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 20:33:34.569563  765316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 20:33:34.689940  765316 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-09-06 20:33:34.677145194 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
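
The step above shells out to `docker system info --format "{{json .}}"` and decodes the JSON it prints. A minimal Go sketch of that pattern, assuming only a hand-picked subset of fields (this is not minikube's actual struct):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // dockerInfo holds only the fields this sketch cares about.
    type dockerInfo struct {
    	NCPU            int    `json:"NCPU"`
    	MemTotal        int64  `json:"MemTotal"`
    	OperatingSystem string `json:"OperatingSystem"`
    	ServerVersion   string `json:"ServerVersion"`
    	CgroupDriver    string `json:"CgroupDriver"`
    }

    func main() {
    	// Ask the docker CLI to emit its info as a single JSON document.
    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	var info dockerInfo
    	if err := json.Unmarshal(out, &info); err != nil {
    		panic(err)
    	}
    	fmt.Printf("docker %s on %s: %d CPUs, %d bytes RAM, cgroup driver %s\n",
    		info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal, info.CgroupDriver)
    }
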
	I0906 20:33:34.690392  765316 cni.go:84] Creating CNI manager for ""
	I0906 20:33:34.690403  765316 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0906 20:33:34.690414  765316 start_flags.go:321] config:
	{Name:pause-056574 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-056574 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesna
pshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 20:33:34.693526  765316 out.go:177] * Starting control plane node pause-056574 in cluster pause-056574
	I0906 20:33:34.695389  765316 cache.go:122] Beginning downloading kic base image for docker with crio
	I0906 20:33:34.697433  765316 out.go:177] * Pulling base image ...
	I0906 20:33:34.699855  765316 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0906 20:33:34.700170  765316 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4
	I0906 20:33:34.700208  765316 cache.go:57] Caching tarball of preloaded images
	I0906 20:33:34.700304  765316 preload.go:174] Found /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0906 20:33:34.700314  765316 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0906 20:33:34.700426  765316 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon
	I0906 20:33:34.700814  765316 profile.go:148] Saving config to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/config.json ...
	I0906 20:33:34.732284  765316 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon, skipping pull
	I0906 20:33:34.732306  765316 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad exists in daemon, skipping load
	I0906 20:33:34.732324  765316 cache.go:195] Successfully downloaded all kic artifacts
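
The "exists in daemon, skipping pull" check above amounts to asking whether an image reference resolves in the local daemon. A Go sketch under the assumption that the docker CLI's exit code is a good enough signal (the image ref is copied from the log for illustration):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // imageInDaemon reports whether the local docker daemon already has ref,
    // so a pull can be skipped. `docker image inspect` exits non-zero when
    // the image is absent.
    func imageInDaemon(ref string) bool {
    	return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
    	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145"
    	if imageInDaemon(ref) {
    		fmt.Println("found in local docker daemon, skipping pull")
    	} else {
    		fmt.Println("not found locally, would pull")
    	}
    }
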
	I0906 20:33:34.732372  765316 start.go:365] acquiring machines lock for pause-056574: {Name:mk90a09ef8a87298b0c7a90b2424c10110e9aa4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:33:34.732448  765316 start.go:369] acquired machines lock for "pause-056574" in 50.027µs
	I0906 20:33:34.732479  765316 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:33:34.732489  765316 fix.go:54] fixHost starting: 
	I0906 20:33:34.732759  765316 cli_runner.go:164] Run: docker container inspect pause-056574 --format={{.State.Status}}
	I0906 20:33:34.750767  765316 fix.go:102] recreateIfNeeded on pause-056574: state=Running err=<nil>
	W0906 20:33:34.750796  765316 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 20:33:34.752565  765316 out.go:177] * Updating the running docker "pause-056574" container ...
	I0906 20:33:34.754518  765316 machine.go:88] provisioning docker machine ...
	I0906 20:33:34.754566  765316 ubuntu.go:169] provisioning hostname "pause-056574"
	I0906 20:33:34.754648  765316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-056574
	I0906 20:33:34.773035  765316 main.go:141] libmachine: Using SSH client type: native
	I0906 20:33:34.773516  765316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33567 <nil> <nil>}
	I0906 20:33:34.773535  765316 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-056574 && echo "pause-056574" | sudo tee /etc/hostname
	I0906 20:33:34.929932  765316 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-056574
	
	I0906 20:33:34.930014  765316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-056574
	I0906 20:33:34.953757  765316 main.go:141] libmachine: Using SSH client type: native
	I0906 20:33:34.954237  765316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33567 <nil> <nil>}
	I0906 20:33:34.954262  765316 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-056574' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-056574/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-056574' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:33:35.103845  765316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
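
The provisioning steps above run small shell snippets over SSH against the container's forwarded port. A stripped-down Go sketch of the same round trip using golang.org/x/crypto/ssh; the host, port, user, and key path are taken from the log, and error handling is reduced to panics:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/17116-652515/.minikube/machines/pause-056574/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test node, host key not pinned
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:33567", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()

    	// Same idea as the provisioning step: set the hostname inside the node.
    	out, err := sess.CombinedOutput(`sudo hostname pause-056574 && echo "pause-056574" | sudo tee /etc/hostname`)
    	fmt.Printf("err=%v output=%s\n", err, out)
    }
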
	I0906 20:33:35.103880  765316 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17116-652515/.minikube CaCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17116-652515/.minikube}
	I0906 20:33:35.103902  765316 ubuntu.go:177] setting up certificates
	I0906 20:33:35.103931  765316 provision.go:83] configureAuth start
	I0906 20:33:35.104005  765316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-056574
	I0906 20:33:35.125515  765316 provision.go:138] copyHostCerts
	I0906 20:33:35.125587  765316 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem, removing ...
	I0906 20:33:35.125600  765316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem
	I0906 20:33:35.125675  765316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem (1082 bytes)
	I0906 20:33:35.125782  765316 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem, removing ...
	I0906 20:33:35.125792  765316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem
	I0906 20:33:35.125821  765316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem (1123 bytes)
	I0906 20:33:35.125893  765316 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem, removing ...
	I0906 20:33:35.125901  765316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem
	I0906 20:33:35.125930  765316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem (1679 bytes)
	I0906 20:33:35.125987  765316 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem org=jenkins.pause-056574 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube pause-056574]
	I0906 20:33:35.464852  765316 provision.go:172] copyRemoteCerts
	I0906 20:33:35.464921  765316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:33:35.464976  765316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-056574
	I0906 20:33:35.485046  765316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33567 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/pause-056574/id_rsa Username:docker}
	I0906 20:33:35.588536  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 20:33:35.625299  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0906 20:33:35.663087  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:33:35.701708  765316 provision.go:86] duration metric: configureAuth took 597.759466ms
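
configureAuth issues a server certificate whose SANs cover the node IP plus the usual local names, signed by the shared minikube CA. A self-contained crypto/x509 sketch of issuing such a cert; the SANs and lifetimes mirror the log, the in-memory CA stands in for ca.pem/ca-key.pem, and none of this is minikube's actual code:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA (the real flow loads ~/.minikube/certs/ca.pem and ca-key.pem).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate with the SANs seen in the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.pause-056574"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "pause-056574"},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
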
	I0906 20:33:35.701734  765316 ubuntu.go:193] setting minikube options for container-runtime
	I0906 20:33:35.701979  765316 config.go:182] Loaded profile config "pause-056574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 20:33:35.702146  765316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-056574
	I0906 20:33:35.741036  765316 main.go:141] libmachine: Using SSH client type: native
	I0906 20:33:35.742277  765316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33567 <nil> <nil>}
	I0906 20:33:35.742314  765316 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:33:41.370326  765316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:33:41.370355  765316 machine.go:91] provisioned docker machine in 6.615822493s
	I0906 20:33:41.370371  765316 start.go:300] post-start starting for "pause-056574" (driver="docker")
	I0906 20:33:41.370382  765316 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:33:41.370454  765316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:33:41.370833  765316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-056574
	I0906 20:33:41.399017  765316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33567 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/pause-056574/id_rsa Username:docker}
	I0906 20:33:41.505737  765316 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:33:41.513548  765316 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 20:33:41.513586  765316 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 20:33:41.513598  765316 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 20:33:41.513605  765316 info.go:137] Remote host: Ubuntu 22.04.3 LTS
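
The "Couldn't set key" warnings above come from mapping /etc/os-release entries onto a struct that only knows a handful of keys, so unknown ones (VERSION_CODENAME, PRIVACY_POLICY_URL, UBUNTU_CODENAME) are skipped. A small Go sketch of the same key=value parse, reading into a map instead of a fixed struct:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // parseOSRelease reads /etc/os-release style KEY=value lines into a map,
    // trimming surrounding quotes from the values and skipping comments.
    func parseOSRelease(path string) (map[string]string, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return nil, err
    	}
    	defer f.Close()

    	out := map[string]string{}
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if line == "" || strings.HasPrefix(line, "#") {
    			continue
    		}
    		k, v, ok := strings.Cut(line, "=")
    		if !ok {
    			continue
    		}
    		out[k] = strings.Trim(v, `"`)
    	}
    	return out, sc.Err()
    }

    func main() {
    	info, err := parseOSRelease("/etc/os-release")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(info["PRETTY_NAME"]) // e.g. "Ubuntu 22.04.3 LTS" on the node above
    }
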
	I0906 20:33:41.513616  765316 filesync.go:126] Scanning /home/jenkins/minikube-integration/17116-652515/.minikube/addons for local assets ...
	I0906 20:33:41.513686  765316 filesync.go:126] Scanning /home/jenkins/minikube-integration/17116-652515/.minikube/files for local assets ...
	I0906 20:33:41.513786  765316 filesync.go:149] local asset: /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem -> 6579002.pem in /etc/ssl/certs
	I0906 20:33:41.513902  765316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:33:41.525815  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem --> /etc/ssl/certs/6579002.pem (1708 bytes)
	I0906 20:33:41.559398  765316 start.go:303] post-start completed in 189.009691ms
	I0906 20:33:41.559509  765316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 20:33:41.559561  765316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-056574
	I0906 20:33:41.578582  765316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33567 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/pause-056574/id_rsa Username:docker}
	I0906 20:33:41.672501  765316 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 20:33:41.679314  765316 fix.go:56] fixHost completed within 6.946815867s
	I0906 20:33:41.679340  765316 start.go:83] releasing machines lock for "pause-056574", held for 6.94688081s
	I0906 20:33:41.679452  765316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-056574
	I0906 20:33:41.700000  765316 ssh_runner.go:195] Run: cat /version.json
	I0906 20:33:41.700072  765316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-056574
	I0906 20:33:41.700343  765316 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:33:41.700410  765316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-056574
	I0906 20:33:41.720050  765316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33567 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/pause-056574/id_rsa Username:docker}
	I0906 20:33:41.730256  765316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33567 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/pause-056574/id_rsa Username:docker}
	I0906 20:33:41.814929  765316 ssh_runner.go:195] Run: systemctl --version
	I0906 20:33:41.962994  765316 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:33:42.130540  765316 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 20:33:42.138170  765316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:33:42.151518  765316 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0906 20:33:42.151632  765316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:33:42.171316  765316 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0906 20:33:42.171343  765316 start.go:466] detecting cgroup driver to use...
	I0906 20:33:42.171384  765316 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0906 20:33:42.171440  765316 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:33:42.189434  765316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:33:42.207051  765316 docker.go:196] disabling cri-docker service (if available) ...
	I0906 20:33:42.207121  765316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:33:42.229017  765316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:33:42.247146  765316 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:33:42.390753  765316 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:33:42.517835  765316 docker.go:212] disabling docker service ...
	I0906 20:33:42.517907  765316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:33:42.533863  765316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:33:42.547730  765316 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:33:42.677944  765316 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:33:42.808903  765316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:33:42.822945  765316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:33:42.844218  765316 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0906 20:33:42.844284  765316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:33:42.861002  765316 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:33:42.861102  765316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:33:42.873532  765316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:33:42.885916  765316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:33:42.898763  765316 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:33:42.910947  765316 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:33:42.924563  765316 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:33:42.936784  765316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:33:43.449022  765316 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:33:46.252020  765316 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.802957999s)
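
The sed invocations above rewrite two keys in /etc/crio/crio.conf.d/02-crio.conf (pause_image and cgroup_manager) before crio is reloaded and restarted. A Go sketch of the same line rewrite done in-process with regexp; the path and values are copied from the log, and this is meant to be run against a copy of the file, not a live node:

    package main

    import (
    	"os"
    	"regexp"
    )

    // setConfKey replaces any existing `key = ...` line in conf with `key = "value"`,
    // mirroring what the sed commands in the log do.
    func setConfKey(conf []byte, key, value string) []byte {
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
    }

    func main() {
    	path := "/etc/crio/crio.conf.d/02-crio.conf"
    	conf, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	conf = setConfKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
    	conf = setConfKey(conf, "cgroup_manager", "cgroupfs")
    	if err := os.WriteFile(path, conf, 0o644); err != nil {
    		panic(err)
    	}
    	// After this the provisioner runs `systemctl daemon-reload` and `systemctl restart crio`,
    	// which is the ~2.8s restart recorded above.
    }
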
	I0906 20:33:46.252075  765316 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:33:46.252129  765316 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:33:46.270303  765316 start.go:534] Will wait 60s for crictl version
	I0906 20:33:46.270367  765316 ssh_runner.go:195] Run: which crictl
	I0906 20:33:46.287200  765316 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:33:46.390868  765316 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0906 20:33:46.390954  765316 ssh_runner.go:195] Run: crio --version
	I0906 20:33:46.503270  765316 ssh_runner.go:195] Run: crio --version
	I0906 20:33:46.585478  765316 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0906 20:33:46.587551  765316 cli_runner.go:164] Run: docker network inspect pause-056574 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0906 20:33:46.612590  765316 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0906 20:33:46.621996  765316 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0906 20:33:46.622086  765316 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:33:46.679361  765316 crio.go:496] all images are preloaded for cri-o runtime.
	I0906 20:33:46.679387  765316 crio.go:415] Images already preloaded, skipping extraction
	I0906 20:33:46.679443  765316 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:33:46.750400  765316 crio.go:496] all images are preloaded for cri-o runtime.
	I0906 20:33:46.750425  765316 cache_images.go:84] Images are preloaded, skipping loading
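
The preload verification above asks crictl for the image list as JSON and checks that everything needed for v1.28.1 is already present. A Go sketch of reading that output; the struct below assumes the top-level shape of `crictl images --output json` ("images" entries with "repoTags"), and the two expected images are illustrative:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type imageList struct {
    	Images []struct {
    		ID       string   `json:"id"`
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}
    	// A preload is considered usable when every expected image is already present.
    	for _, want := range []string{
    		"registry.k8s.io/kube-apiserver:v1.28.1",
    		"registry.k8s.io/coredns/coredns:v1.10.1",
    	} {
    		fmt.Println(want, "preloaded:", have[want])
    	}
    }
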
	I0906 20:33:46.750502  765316 ssh_runner.go:195] Run: crio config
	I0906 20:33:46.830543  765316 cni.go:84] Creating CNI manager for ""
	I0906 20:33:46.830568  765316 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0906 20:33:46.830591  765316 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 20:33:46.830614  765316 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-056574 NodeName:pause-056574 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 20:33:46.830767  765316 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-056574"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 20:33:46.830857  765316 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-056574 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:pause-056574 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 20:33:46.830927  765316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0906 20:33:46.843205  765316 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:33:46.843294  765316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:33:46.854431  765316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I0906 20:33:46.896420  765316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:33:46.934716  765316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
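
The kubeadm.yaml.new written above is a multi-document YAML file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, as generated earlier in the log. A Go sketch that walks those documents and prints each kind, assuming gopkg.in/yaml.v3 and the path from the log:

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		err := dec.Decode(&doc)
    		if err == io.EOF {
    			break
    		}
    		if err != nil {
    			panic(err)
    		}
    		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
    	}
    	// Expected kinds: InitConfiguration, ClusterConfiguration,
    	// KubeletConfiguration, KubeProxyConfiguration.
    }
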
	I0906 20:33:46.960797  765316 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0906 20:33:46.966577  765316 certs.go:56] Setting up /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574 for IP: 192.168.67.2
	I0906 20:33:46.966617  765316 certs.go:190] acquiring lock for shared ca certs: {Name:mk5596cf7beb26b5b83b50e551aa70cf266830a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:33:46.966754  765316 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.key
	I0906 20:33:46.966796  765316 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.key
	I0906 20:33:46.966880  765316 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/client.key
	I0906 20:33:46.966941  765316 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/apiserver.key.c7fa3a9e
	I0906 20:33:46.966982  765316 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/proxy-client.key
	I0906 20:33:46.967090  765316 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/657900.pem (1338 bytes)
	W0906 20:33:46.967119  765316 certs.go:433] ignoring /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/657900_empty.pem, impossibly tiny 0 bytes
	I0906 20:33:46.967129  765316 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:33:46.967153  765316 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem (1082 bytes)
	I0906 20:33:46.967182  765316 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:33:46.967205  765316 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem (1679 bytes)
	I0906 20:33:46.967253  765316 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem (1708 bytes)
	I0906 20:33:46.967848  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 20:33:47.005937  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 20:33:47.051356  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:33:47.110191  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 20:33:47.555983  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:33:47.747732  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 20:33:47.961923  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:33:48.072519  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:33:48.289773  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem --> /usr/share/ca-certificates/6579002.pem (1708 bytes)
	I0906 20:33:48.404448  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:33:48.532333  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/certs/657900.pem --> /usr/share/ca-certificates/657900.pem (1338 bytes)
	I0906 20:33:48.682016  765316 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:33:48.774206  765316 ssh_runner.go:195] Run: openssl version
	I0906 20:33:48.810709  765316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:33:48.870390  765316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:33:48.901746  765316 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:33:48.901812  765316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:33:48.939984  765316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:33:48.994584  765316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/657900.pem && ln -fs /usr/share/ca-certificates/657900.pem /etc/ssl/certs/657900.pem"
	I0906 20:33:49.046618  765316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/657900.pem
	I0906 20:33:49.061504  765316 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 20:04 /usr/share/ca-certificates/657900.pem
	I0906 20:33:49.061623  765316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/657900.pem
	I0906 20:33:49.087874  765316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/657900.pem /etc/ssl/certs/51391683.0"
	I0906 20:33:49.139712  765316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6579002.pem && ln -fs /usr/share/ca-certificates/6579002.pem /etc/ssl/certs/6579002.pem"
	I0906 20:33:49.183818  765316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6579002.pem
	I0906 20:33:49.214747  765316 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 20:04 /usr/share/ca-certificates/6579002.pem
	I0906 20:33:49.214885  765316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6579002.pem
	I0906 20:33:49.248302  765316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6579002.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:33:49.283927  765316 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0906 20:33:49.306853  765316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:33:49.342866  765316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:33:49.387558  765316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:33:49.420638  765316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:33:49.453227  765316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:33:49.486312  765316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
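
Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now; exit status 0 means yes, so the existing certs can be reused. A Go equivalent of that check using crypto/x509, with one of the paths from the log as the example input:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the PEM certificate at path is still valid
    // after the given duration, matching `openssl x509 -checkend <seconds>`.
    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 86400*time.Second)
    	fmt.Println(ok, err)
    }
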
	I0906 20:33:49.522285  765316 kubeadm.go:404] StartCluster: {Name:pause-056574 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-056574 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServ
erIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-p
rovisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 20:33:49.522473  765316 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:33:49.522565  765316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:33:49.708977  765316 cri.go:89] found id: "025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59"
	I0906 20:33:49.709052  765316 cri.go:89] found id: "34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087"
	I0906 20:33:49.709072  765316 cri.go:89] found id: "b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558"
	I0906 20:33:49.709092  765316 cri.go:89] found id: "8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0"
	I0906 20:33:49.709127  765316 cri.go:89] found id: "545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7"
	I0906 20:33:49.709151  765316 cri.go:89] found id: "931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d"
	I0906 20:33:49.709172  765316 cri.go:89] found id: "4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4"
	I0906 20:33:49.709208  765316 cri.go:89] found id: "b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3"
	I0906 20:33:49.709228  765316 cri.go:89] found id: "1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1"
	I0906 20:33:49.709250  765316 cri.go:89] found id: "9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1"
	I0906 20:33:49.709284  765316 cri.go:89] found id: ""
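
The ID list above comes from `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`, which prints one container ID per line (the empty trailing entry is presumably just the final newline). A Go sketch that runs the same query and collects the non-empty IDs:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same query as the log: all containers in any state whose pod namespace
    	// label is kube-system, IDs only.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		panic(err)
    	}
    	var ids []string
    	for _, line := range strings.Split(string(out), "\n") {
    		if line = strings.TrimSpace(line); line != "" {
    			ids = append(ids, line)
    		}
    	}
    	fmt.Printf("found %d kube-system containers\n", len(ids))
    }
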
	I0906 20:33:49.709369  765316 ssh_runner.go:195] Run: sudo runc list -f json
	I0906 20:33:49.843047  765316 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59","pid":2643,"status":"running","bundle":"/run/containers/storage/overlay-containers/025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59/userdata","rootfs":"/var/lib/containers/storage/overlay/9a0fdc0e84afe46fa465ef123b99f238cb7ab6df2d72c8365d9f1daf218965d8/merged","created":"2023-09-06T20:33:47.754975222Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"61920a46","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"61920a46\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termina
tionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:33:47.40697841Z","io.kubernetes.cri-o.Image":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.28.1","io.kubernetes.cri-o.ImageRef":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-056574\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c6d2a7cab994123e8583d4411511571e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-056574_c6d2a7cab994123e8583d4411511571e/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attemp
t\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9a0fdc0e84afe46fa465ef123b99f238cb7ab6df2d72c8365d9f1daf218965d8/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-056574_kube-system_c6d2a7cab994123e8583d4411511571e_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/07072b4ff77295ed198bb1290d87689f5197d61e269ef62b0502747a402a5a05/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"07072b4ff77295ed198bb1290d87689f5197d61e269ef62b0502747a402a5a05","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-056574_kube-system_c6d2a7cab994123e8583d4411511571e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c6d2a7cab994123e8583d4411511571e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"c
ontainer_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c6d2a7cab994123e8583d4411511571e/containers/kube-scheduler/08ab8438\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-056574","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c6d2a7cab994123e8583d4411511571e","kubernetes.io/config.hash":"c6d2a7cab994123e8583d4411511571e","kubernetes.io/config.seen":"2023-09-06T20:32:32.356236838Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1/userdata","root
fs":"/var/lib/containers/storage/overlay/0d39b6a7ce71b0b0b4818a99d81020ebbb8fb26ea088a48aec8d6383ba9671ae/merged","created":"2023-09-06T20:32:33.117974564Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"61920a46","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"61920a46\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:32:32.931764147Z","io.kubernetes.cri-o.Imag
e":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.28.1","io.kubernetes.cri-o.ImageRef":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-056574\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c6d2a7cab994123e8583d4411511571e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-056574_c6d2a7cab994123e8583d4411511571e/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0d39b6a7ce71b0b0b4818a99d81020ebbb8fb26ea088a48aec8d6383ba9671ae/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-056574_kube-system_c6d2a7cab994123e8583d4411511571e_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/o
verlay-containers/07072b4ff77295ed198bb1290d87689f5197d61e269ef62b0502747a402a5a05/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"07072b4ff77295ed198bb1290d87689f5197d61e269ef62b0502747a402a5a05","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-056574_kube-system_c6d2a7cab994123e8583d4411511571e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c6d2a7cab994123e8583d4411511571e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c6d2a7cab994123e8583d4411511571e/containers/kube-scheduler/959d7a9b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":tru
e,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-056574","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c6d2a7cab994123e8583d4411511571e","kubernetes.io/config.hash":"c6d2a7cab994123e8583d4411511571e","kubernetes.io/config.seen":"2023-09-06T20:32:32.356236838Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087","pid":2625,"status":"running","bundle":"/run/containers/storage/overlay-containers/34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087/userdata","rootfs":"/var/lib/containers/storage/overlay/78c0240fabaa90c56a94d81e99fd3a2184693274f31def03c2def7e70a5c4e5b/merged","created":"2023-09-06T20:33:47.591827705Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b7243b12","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.res
tartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b7243b12\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:33:47.259333074Z","io.kubernetes.cri-o.Image":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.28.1","io.kubernetes.cri-o.ImageRef":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-
controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-056574\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"16b1e5bd06f3d89b712ef5511a1ff134\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-056574_16b1e5bd06f3d89b712ef5511a1ff134/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/78c0240fabaa90c56a94d81e99fd3a2184693274f31def03c2def7e70a5c4e5b/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-056574_kube-system_16b1e5bd06f3d89b712ef5511a1ff134_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ff36952b952c004bc87e21ab2ad4188764ee0c7ec492bc0b934dbdb303c0aae7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ff36952b952c004bc87e21ab2ad4188764ee0c7ec492bc0b934dbdb303c0aae7","io.kubernetes.cri-o.SandboxNam
e":"k8s_kube-controller-manager-pause-056574_kube-system_16b1e5bd06f3d89b712ef5511a1ff134_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/16b1e5bd06f3d89b712ef5511a1ff134/containers/kube-controller-manager/badcb97d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/16b1e5bd06f3d89b712ef5511a1ff134/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manage
r.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-056574","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"16b1e5bd06f3d89b71
2ef5511a1ff134","kubernetes.io/config.hash":"16b1e5bd06f3d89b712ef5511a1ff134","kubernetes.io/config.seen":"2023-09-06T20:32:32.356235886Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4/userdata","rootfs":"/var/lib/containers/storage/overlay/09e08c1840b9ddc6b7abb7882334429a311dbd153747ace4e1eab0434302f582/merged","created":"2023-09-06T20:33:00.586685495Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"5b6be1","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"5b6be1\",\"io.kubernetes.container.resta
rtCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:33:00.550150556Z","io.kubernetes.cri-o.Image":"b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230511-dc714da8","io.kubernetes.cri-o.ImageRef":"b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-rw8hd\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e90346fb-20dd-4265-8d3b-8f0a270025ce\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-rw8hd_e90346fb-20dd-4265
-8d3b-8f0a270025ce/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/09e08c1840b9ddc6b7abb7882334429a311dbd153747ace4e1eab0434302f582/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-rw8hd_kube-system_e90346fb-20dd-4265-8d3b-8f0a270025ce_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/bb7982c6df4f0bbd6b02cdca8427fba6fe97e6154887c4d548449995a73fca8d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"bb7982c6df4f0bbd6b02cdca8427fba6fe97e6154887c4d548449995a73fca8d","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-rw8hd_kube-system_e90346fb-20dd-4265-8d3b-8f0a270025ce_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"se
linux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e90346fb-20dd-4265-8d3b-8f0a270025ce/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e90346fb-20dd-4265-8d3b-8f0a270025ce/containers/kindnet-cni/e0d6d98f\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/e90346fb-20dd-4265-8d3b-8f0a270025ce/volumes/kubernetes.io~projected/kube-api-access-bvhwl\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-rw8hd","io.kubernetes.pod.name
space":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e90346fb-20dd-4265-8d3b-8f0a270025ce","kubernetes.io/config.seen":"2023-09-06T20:32:58.685594541Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7/userdata","rootfs":"/var/lib/containers/storage/overlay/49de6b079c2a491ab0497adb3974e73fece3417bc7b8451d518a41c4fb9cbcf8/merged","created":"2023-09-06T20:33:31.702279839Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"f0a6b0f8","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.k
ubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"f0a6b0f8\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:33:31.638983323Z","io.kubernetes.cri-o.IP.0":"10
.244.0.2","io.kubernetes.cri-o.Image":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri-o.ImageRef":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5dd5756b68-5tvwb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d2358999-88bf-4ed4-b2ca-c2fb70773e36\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5dd5756b68-5tvwb_d2358999-88bf-4ed4-b2ca-c2fb70773e36/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/49de6b079c2a491ab0497adb3974e73fece3417bc7b8451d518a41c4fb9cbcf8/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5dd5756b68-5tvwb_kube-system_d2358999-88bf-4ed4-b2ca-c2fb70773e36_0","io.kubernetes.cri-o.ResolvPath":"/run/container
s/storage/overlay-containers/d446768dbcd6e7973cdd3f1e55bcfad6d797985bb6b132644d4e2b88258a3eb3/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"d446768dbcd6e7973cdd3f1e55bcfad6d797985bb6b132644d4e2b88258a3eb3","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5dd5756b68-5tvwb_kube-system_d2358999-88bf-4ed4-b2ca-c2fb70773e36_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/d2358999-88bf-4ed4-b2ca-c2fb70773e36/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d2358999-88bf-4ed4-b2ca-c2fb70773e36/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d2358999-88bf-4
ed4-b2ca-c2fb70773e36/containers/coredns/8c896daa\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/d2358999-88bf-4ed4-b2ca-c2fb70773e36/volumes/kubernetes.io~projected/kube-api-access-v6dff\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5dd5756b68-5tvwb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d2358999-88bf-4ed4-b2ca-c2fb70773e36","kubernetes.io/config.seen":"2023-09-06T20:33:31.246022801Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0/userdata","rootfs":"/var/lib/containers/storage/overlay/6373dce176da5954377857
8d6665461d3c2dc0d9933f1dfb468f5a4fd018ac3c/merged","created":"2023-09-06T20:33:43.556515142Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"dda786a5","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"dda786a5\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:33:43.264099732Z","io.kubernetes.cri-o.Image":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io
.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri-o.ImageRef":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-056574\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"37fb3a22f6eccf83d612f100244ce554\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-056574_37fb3a22f6eccf83d612f100244ce554/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6373dce176da59543778578d6665461d3c2dc0d9933f1dfb468f5a4fd018ac3c/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-056574_kube-system_37fb3a22f6eccf83d612f100244ce554_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/21221832e99b3e31cd6beb4d57d454fb31112ee01f4c8c0d66d54a580badde87/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2
1221832e99b3e31cd6beb4d57d454fb31112ee01f4c8c0d66d54a580badde87","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-056574_kube-system_37fb3a22f6eccf83d612f100244ce554_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/37fb3a22f6eccf83d612f100244ce554/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/37fb3a22f6eccf83d612f100244ce554/containers/etcd/cc2c9ec4\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propag
ation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-056574","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"37fb3a22f6eccf83d612f100244ce554","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"37fb3a22f6eccf83d612f100244ce554","kubernetes.io/config.seen":"2023-09-06T20:32:32.356228387Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1/userdata","rootfs":"/var/lib/containers/storage/overlay/0bc647f5fb26b350ddfa19494d30afb617a6eafa5c7da09827a2d89e9447c228/merged","created":"2023-09-06T20:32:33.139041143Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c997f2bc","io.kubernetes.container.name
":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c997f2bc\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:32:32.910534918Z","io.kubernetes.cri-o.Image":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.1","io.kubernetes.cri-o.ImageRef":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","io.kubernetes.cri-o.Labels":"{\"
io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-056574\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9eed4bbee484bdf886f9c44e782aff8a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-056574_9eed4bbee484bdf886f9c44e782aff8a/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0bc647f5fb26b350ddfa19494d30afb617a6eafa5c7da09827a2d89e9447c228/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-056574_kube-system_9eed4bbee484bdf886f9c44e782aff8a_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/dea0c642ad445c73376b0494852befa1f0f7ab3a490a469671e36b6039742ff7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"dea0c642ad445c73376b0494852befa1f0f7ab3a490a469671e36b6039742ff7","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-056574_kub
e-system_9eed4bbee484bdf886f9c44e782aff8a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9eed4bbee484bdf886f9c44e782aff8a/containers/kube-apiserver/68b84355\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9eed4bbee484bdf886f9c44e782aff8a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":tru
e,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-056574","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9eed4bbee484bdf886f9c44e782aff8a","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"9eed4bbee484bdf886f9c44e782aff8a","kubernetes.io/config.seen":"2023-09-06T20:32:32.356234565Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/931
d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d/userdata","rootfs":"/var/lib/containers/storage/overlay/9150ef67f282b8509500346127c1e9d8e62082a39c1f46891b83b65ce6f9f60b/merged","created":"2023-09-06T20:33:00.886032439Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"f7cf1de9","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"f7cf1de9\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.
cri-o.Created":"2023-09-06T20:33:00.832992491Z","io.kubernetes.cri-o.Image":"812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.28.1","io.kubernetes.cri-o.ImageRef":"812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-mhjb5\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2f662ac9-4819-4de1-a149-1427c9be35f4\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-mhjb5_2f662ac9-4819-4de1-a149-1427c9be35f4/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9150ef67f282b8509500346127c1e9d8e62082a39c1f46891b83b65ce6f9f60b/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-mhjb5_kube-system_2f662ac9-4819-4de1-a149-1427c9be35f4_0","io.kubernetes.cri-o.Resolv
Path":"/run/containers/storage/overlay-containers/dc2a0c975464dc25e7bfefc575d08a0a3618933283327721de1d249ce091b30f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"dc2a0c975464dc25e7bfefc575d08a0a3618933283327721de1d249ce091b30f","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-mhjb5_kube-system_2f662ac9-4819-4de1-a149-1427c9be35f4_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2f662ac9-4819-4de1-a149-1427c9be35f4/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/terminat
ion-log\",\"host_path\":\"/var/lib/kubelet/pods/2f662ac9-4819-4de1-a149-1427c9be35f4/containers/kube-proxy/0cfce1a2\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/2f662ac9-4819-4de1-a149-1427c9be35f4/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/2f662ac9-4819-4de1-a149-1427c9be35f4/volumes/kubernetes.io~projected/kube-api-access-l7wqp\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-mhjb5","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2f662ac9-4819-4de1-a149-1427c9be35f4","kubernetes.io/config.seen":"2023-09-06T20:32:58.684519695Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b0aa
8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3/userdata","rootfs":"/var/lib/containers/storage/overlay/617854d4035a3adccb6a613fdb235f483e73c817d8bc69ce8d9864bba04b8f05/merged","created":"2023-09-06T20:32:33.124641754Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b7243b12","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b7243b12\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.ku
bernetes.cri-o.ContainerID":"b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:32:32.953702159Z","io.kubernetes.cri-o.Image":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.28.1","io.kubernetes.cri-o.ImageRef":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-056574\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"16b1e5bd06f3d89b712ef5511a1ff134\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-056574_16b1e5bd06f3d89b712ef5511a1ff134/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/l
ib/containers/storage/overlay/617854d4035a3adccb6a613fdb235f483e73c817d8bc69ce8d9864bba04b8f05/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-056574_kube-system_16b1e5bd06f3d89b712ef5511a1ff134_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ff36952b952c004bc87e21ab2ad4188764ee0c7ec492bc0b934dbdb303c0aae7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ff36952b952c004bc87e21ab2ad4188764ee0c7ec492bc0b934dbdb303c0aae7","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-056574_kube-system_16b1e5bd06f3d89b712ef5511a1ff134_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\
"/var/lib/kubelet/pods/16b1e5bd06f3d89b712ef5511a1ff134/containers/kube-controller-manager/ac34c3ca\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/16b1e5bd06f3d89b712ef5511a1ff134/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-ce
rtificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-056574","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"16b1e5bd06f3d89b712ef5511a1ff134","kubernetes.io/config.hash":"16b1e5bd06f3d89b712ef5511a1ff134","kubernetes.io/config.seen":"2023-09-06T20:32:32.356235886Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558","pid":2608,"status":"running","bundle":"/run/containers/storage/overlay-containers/b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558/userdata","rootfs":"/var/lib/containers/storag
e/overlay/387ce735afb15105697157ef2c46f8ecb72840ff5d206382be8c6f32b6b7b959/merged","created":"2023-09-06T20:33:47.714197032Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c997f2bc","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c997f2bc\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:33:47.251091707Z","io.kubernetes.cri-o.Image":"b29fb62480892633ac479243b98
41b88f9ae30865773fd76b97522541cd5633a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.1","io.kubernetes.cri-o.ImageRef":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-056574\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9eed4bbee484bdf886f9c44e782aff8a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-056574_9eed4bbee484bdf886f9c44e782aff8a/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/387ce735afb15105697157ef2c46f8ecb72840ff5d206382be8c6f32b6b7b959/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-056574_kube-system_9eed4bbee484bdf886f9c44e782aff8a_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers
/dea0c642ad445c73376b0494852befa1f0f7ab3a490a469671e36b6039742ff7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"dea0c642ad445c73376b0494852befa1f0f7ab3a490a469671e36b6039742ff7","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-056574_kube-system_9eed4bbee484bdf886f9c44e782aff8a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9eed4bbee484bdf886f9c44e782aff8a/containers/kube-apiserver/ddbd8f57\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9eed4bbee484bdf886f9c44e782aff8a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel
\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-056574","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9eed4bbee484bdf886f9c44e782aff8a","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"9eed4bbee484bdf886f9c44e782aff8a","kubernetes.io/config.seen":"2023-09-06
T20:32:32.356234565Z","kubernetes.io/config.source":"file"},"owner":"root"}]
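The JSON blob above is the raw OCI runtime state list (one object per container with id, pid, status, bundle and CRI-O annotations) that the pause check walks through next. A minimal sketch of decoding just the fields the log then iterates over; the struct and the sample IDs here are illustrative, not minikube's actual types:

package main

import (
	"encoding/json"
	"fmt"
)

type ociState struct {
	ID     string `json:"id"`
	PID    int    `json:"pid"`
	Status string `json:"status"`
}

func main() {
	// Trimmed, hypothetical version of the list above; only id/pid/status matter here.
	raw := `[{"id":"abc123","pid":0,"status":"stopped"},{"id":"def456","pid":2608,"status":"running"}]`
	var states []ociState
	if err := json.Unmarshal([]byte(raw), &states); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, s := range states {
		fmt.Printf("container: {ID:%s Status:%s}\n", s.ID, s.Status)
	}
}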
	I0906 20:33:49.843906  765316 cri.go:126] list returned 10 containers
	I0906 20:33:49.843958  765316 cri.go:129] container: {ID:025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59 Status:running}
	I0906 20:33:49.843993  765316 cri.go:135] skipping {025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59 running}: state = "running", want "paused"
	I0906 20:33:49.844030  765316 cri.go:129] container: {ID:1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1 Status:stopped}
	I0906 20:33:49.844057  765316 cri.go:135] skipping {1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1 stopped}: state = "stopped", want "paused"
	I0906 20:33:49.844079  765316 cri.go:129] container: {ID:34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087 Status:running}
	I0906 20:33:49.844113  765316 cri.go:135] skipping {34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087 running}: state = "running", want "paused"
	I0906 20:33:49.844137  765316 cri.go:129] container: {ID:4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4 Status:stopped}
	I0906 20:33:49.844159  765316 cri.go:135] skipping {4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4 stopped}: state = "stopped", want "paused"
	I0906 20:33:49.844198  765316 cri.go:129] container: {ID:545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7 Status:stopped}
	I0906 20:33:49.844223  765316 cri.go:135] skipping {545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7 stopped}: state = "stopped", want "paused"
	I0906 20:33:49.844246  765316 cri.go:129] container: {ID:8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0 Status:stopped}
	I0906 20:33:49.844279  765316 cri.go:135] skipping {8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0 stopped}: state = "stopped", want "paused"
	I0906 20:33:49.844307  765316 cri.go:129] container: {ID:9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1 Status:stopped}
	I0906 20:33:49.844329  765316 cri.go:135] skipping {9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1 stopped}: state = "stopped", want "paused"
	I0906 20:33:49.844364  765316 cri.go:129] container: {ID:931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d Status:stopped}
	I0906 20:33:49.844390  765316 cri.go:135] skipping {931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d stopped}: state = "stopped", want "paused"
	I0906 20:33:49.844412  765316 cri.go:129] container: {ID:b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3 Status:stopped}
	I0906 20:33:49.844446  765316 cri.go:135] skipping {b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3 stopped}: state = "stopped", want "paused"
	I0906 20:33:49.844470  765316 cri.go:129] container: {ID:b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558 Status:running}
	I0906 20:33:49.844491  765316 cri.go:135] skipping {b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558 running}: state = "running", want "paused"
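The loop above keeps only containers whose state matches the wanted "paused" state; since every container is either running or stopped, there is nothing to unpause and the restart path continues. A minimal sketch of that filter, using hypothetical types rather than minikube's cri package:

package main

import "fmt"

type container struct {
	ID     string
	Status string
}

// filterByState keeps only containers in the wanted state and logs a
// "skipping" line for the rest, mirroring the cri.go output above.
func filterByState(all []container, want string) []container {
	var kept []container
	for _, c := range all {
		if c.Status != want {
			fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
			continue
		}
		kept = append(kept, c)
	}
	return kept
}

func main() {
	all := []container{{ID: "abc123", Status: "running"}, {ID: "def456", Status: "stopped"}}
	fmt.Println("paused containers:", filterByState(all, "paused"))
}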
	I0906 20:33:49.844580  765316 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:33:49.877572  765316 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0906 20:33:49.877650  765316 kubeadm.go:636] restartCluster start
	I0906 20:33:49.877747  765316 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:33:49.891923  765316 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:33:49.892613  765316 kubeconfig.go:92] found "pause-056574" server: "https://192.168.67.2:8443"
	I0906 20:33:49.894755  765316 kapi.go:59] client config for pause-056574: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/client.crt", KeyFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/client.key", CAFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x172c280), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 20:33:49.895932  765316 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:33:49.938872  765316 api_server.go:166] Checking apiserver status ...
	I0906 20:33:49.938986  765316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:33:49.978295  765316 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2608/cgroup
	I0906 20:33:50.021129  765316 api_server.go:182] apiserver freezer: "8:freezer:/docker/bb332d83cfeaf7b6f46f8b947a0e17a184842508a616c44663f68d6ee29edddb/crio/crio-b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558"
	I0906 20:33:50.021297  765316 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bb332d83cfeaf7b6f46f8b947a0e17a184842508a616c44663f68d6ee29edddb/crio/crio-b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558/freezer.state
	I0906 20:33:50.046744  765316 api_server.go:204] freezer state: "THAWED"
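To decide whether the apiserver container is merely paused, the check above resolves the apiserver PID (2608, from pgrep) to its freezer cgroup and reads freezer.state; THAWED means the process is not frozen. A sketch of that read, assuming the cgroup v1 path layout shown in the log (the real code derives the directory from /proc/<pid>/cgroup):

package main

import (
	"fmt"
	"os"
	"strings"
)

// isFrozen reports whether the freezer cgroup at the given directory is FROZEN
// (container paused) rather than THAWED (running).
func isFrozen(cgroupDir string) (bool, error) {
	data, err := os.ReadFile(cgroupDir + "/freezer.state")
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(data)) == "FROZEN", nil
}

func main() {
	// Hypothetical cgroup directory standing in for the docker/crio path above.
	dir := "/sys/fs/cgroup/freezer/docker/abc123/crio/crio-def456"
	frozen, err := isFrozen(dir)
	if err != nil {
		fmt.Println("freezer check failed:", err)
		return
	}
	fmt.Println("apiserver paused:", frozen)
}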
	I0906 20:33:50.046776  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:33:55.047189  765316 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 20:33:55.047240  765316 retry.go:31] will retry after 310.26661ms: state is "Stopped"
	I0906 20:33:55.357625  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:00.358516  765316 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 20:34:00.358575  765316 retry.go:31] will retry after 290.077348ms: state is "Stopped"
	I0906 20:34:00.648909  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:05.649272  765316 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 20:34:05.649318  765316 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0906 20:34:05.649327  765316 kubeadm.go:1128] stopping kube-system containers ...
	I0906 20:34:05.649336  765316 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:34:05.649402  765316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:34:05.713346  765316 cri.go:89] found id: "05f54a6d8be033bd7c29148b0df899659832d6baf55266ef5cd91ae6387cf6e1"
	I0906 20:34:05.713365  765316 cri.go:89] found id: "bb742a60f04ade79d3b6d8e52d3f63ca2c821b205aceb0ec66cc5f31197be6bc"
	I0906 20:34:05.713371  765316 cri.go:89] found id: "e2eb1c64ed3cdfabc1a99498e56f978b1d13387b663c261485559c5bf1f864e8"
	I0906 20:34:05.713375  765316 cri.go:89] found id: "025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59"
	I0906 20:34:05.713379  765316 cri.go:89] found id: "34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087"
	I0906 20:34:05.713384  765316 cri.go:89] found id: "b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558"
	I0906 20:34:05.713388  765316 cri.go:89] found id: "8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0"
	I0906 20:34:05.713393  765316 cri.go:89] found id: "545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7"
	I0906 20:34:05.713397  765316 cri.go:89] found id: "931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d"
	I0906 20:34:05.713404  765316 cri.go:89] found id: "4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4"
	I0906 20:34:05.713408  765316 cri.go:89] found id: "b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3"
	I0906 20:34:05.713412  765316 cri.go:89] found id: "1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1"
	I0906 20:34:05.713416  765316 cri.go:89] found id: "9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1"
	I0906 20:34:05.713420  765316 cri.go:89] found id: ""
	I0906 20:34:05.713425  765316 cri.go:234] Stopping containers: [05f54a6d8be033bd7c29148b0df899659832d6baf55266ef5cd91ae6387cf6e1 bb742a60f04ade79d3b6d8e52d3f63ca2c821b205aceb0ec66cc5f31197be6bc e2eb1c64ed3cdfabc1a99498e56f978b1d13387b663c261485559c5bf1f864e8 025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59 34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087 b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558 8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0 545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7 931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d 4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4 b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3 1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1 9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1]
	I0906 20:34:05.713489  765316 ssh_runner.go:195] Run: which crictl
	I0906 20:34:05.718682  765316 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 05f54a6d8be033bd7c29148b0df899659832d6baf55266ef5cd91ae6387cf6e1 bb742a60f04ade79d3b6d8e52d3f63ca2c821b205aceb0ec66cc5f31197be6bc e2eb1c64ed3cdfabc1a99498e56f978b1d13387b663c261485559c5bf1f864e8 025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59 34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087 b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558 8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0 545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7 931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d 4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4 b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3 1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1 9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1
	I0906 20:34:13.132443  765316 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 05f54a6d8be033bd7c29148b0df899659832d6baf55266ef5cd91ae6387cf6e1 bb742a60f04ade79d3b6d8e52d3f63ca2c821b205aceb0ec66cc5f31197be6bc e2eb1c64ed3cdfabc1a99498e56f978b1d13387b663c261485559c5bf1f864e8 025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59 34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087 b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558 8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0 545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7 931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d 4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4 b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3 1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1 9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1: (7.413722746s)
	W0906 20:34:13.132505  765316 kubeadm.go:689] Failed to stop kube-system containers: port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 05f54a6d8be033bd7c29148b0df899659832d6baf55266ef5cd91ae6387cf6e1 bb742a60f04ade79d3b6d8e52d3f63ca2c821b205aceb0ec66cc5f31197be6bc e2eb1c64ed3cdfabc1a99498e56f978b1d13387b663c261485559c5bf1f864e8 025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59 34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087 b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558 8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0 545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7 931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d 4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4 b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3 1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1 9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1: Proce
ss exited with status 1
	stdout:
	05f54a6d8be033bd7c29148b0df899659832d6baf55266ef5cd91ae6387cf6e1
	bb742a60f04ade79d3b6d8e52d3f63ca2c821b205aceb0ec66cc5f31197be6bc
	e2eb1c64ed3cdfabc1a99498e56f978b1d13387b663c261485559c5bf1f864e8
	025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59
	34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087
	b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558
	8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0
	
	stderr:
	E0906 20:34:13.129283    2966 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7\": container with ID starting with 545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7 not found: ID does not exist" containerID="545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7"
	time="2023-09-06T20:34:13Z" level=fatal msg="stopping the container \"545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7\": rpc error: code = NotFound desc = could not find container \"545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7\": container with ID starting with 545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7 not found: ID does not exist"
	I0906 20:34:13.132576  765316 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:34:13.237172  765316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:34:13.248899  765316 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Sep  6 20:32 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Sep  6 20:32 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Sep  6 20:32 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep  6 20:32 /etc/kubernetes/scheduler.conf
	
	I0906 20:34:13.248965  765316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:34:13.260819  765316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:34:13.275196  765316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:34:13.289131  765316 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:34:13.289202  765316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:34:13.303821  765316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:34:13.316447  765316 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:34:13.316537  765316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:34:13.328173  765316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:34:13.342566  765316 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0906 20:34:13.342612  765316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:34:13.627415  765316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:34:15.615649  765316 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.988198966s)
	I0906 20:34:15.615681  765316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:34:15.939965  765316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:34:16.035682  765316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
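Rather than performing a full kubeadm init, the restart re-runs only the phases needed to regenerate certificates, kubeconfigs, the kubelet bootstrap and the control-plane and etcd static pod manifests, all against /var/tmp/minikube/kubeadm.yaml. The same sequence, sketched as plain exec calls (not minikube's actual bootstrapper code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Phase sequence taken from the log; each phase is re-run against the
	// generated kubeadm.yaml instead of doing a full `kubeadm init`.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"kubeadm", "init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
			return
		}
	}
	fmt.Println("control-plane phases reapplied")
}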
	I0906 20:34:16.138021  765316 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:34:16.138132  765316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:34:16.151245  765316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:34:16.666029  765316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:34:17.166182  765316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:34:17.217247  765316 api_server.go:72] duration metric: took 1.079224593s to wait for apiserver process to appear ...
	I0906 20:34:17.217269  765316 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:34:17.217286  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:17.217590  765316 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0906 20:34:17.217618  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:17.217783  765316 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0906 20:34:17.718478  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:22.719312  765316 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 20:34:22.719345  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:25.544399  765316 api_server.go:279] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:34:25.544424  765316 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:34:25.544437  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:25.595474  765316 api_server.go:279] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:34:25.595500  765316 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:34:25.718647  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:25.738658  765316 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0906 20:34:25.738687  765316 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0906 20:34:26.218140  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:26.240938  765316 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0906 20:34:26.240976  765316 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0906 20:34:26.718271  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:26.758413  765316 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0906 20:34:26.758493  765316 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0906 20:34:27.217931  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:27.229967  765316 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0906 20:34:27.254936  765316 api_server.go:141] control plane version: v1.28.1
	I0906 20:34:27.254962  765316 api_server.go:131] duration metric: took 10.037686008s to wait for apiserver health ...
	I0906 20:34:27.254973  765316 cni.go:84] Creating CNI manager for ""
	I0906 20:34:27.254980  765316 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0906 20:34:27.257913  765316 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0906 20:34:27.259425  765316 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0906 20:34:27.270904  765316 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0906 20:34:27.270925  765316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0906 20:34:27.301759  765316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0906 20:34:28.771455  765316 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.469655953s)
	I0906 20:34:28.771484  765316 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:34:28.781469  765316 system_pods.go:59] 7 kube-system pods found
	I0906 20:34:28.781560  765316 system_pods.go:61] "coredns-5dd5756b68-5tvwb" [d2358999-88bf-4ed4-b2ca-c2fb70773e36] Running
	I0906 20:34:28.781581  765316 system_pods.go:61] "etcd-pause-056574" [03cf1bfc-ead0-422a-96d5-db71d30b7fa3] Running
	I0906 20:34:28.781622  765316 system_pods.go:61] "kindnet-rw8hd" [e90346fb-20dd-4265-8d3b-8f0a270025ce] Running
	I0906 20:34:28.781651  765316 system_pods.go:61] "kube-apiserver-pause-056574" [f4ad611e-3361-4424-9530-040ba395f734] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 20:34:28.781678  765316 system_pods.go:61] "kube-controller-manager-pause-056574" [ca30e49f-b35a-4884-bedd-3a64973b3e79] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 20:34:28.781714  765316 system_pods.go:61] "kube-proxy-mhjb5" [2f662ac9-4819-4de1-a149-1427c9be35f4] Running
	I0906 20:34:28.781738  765316 system_pods.go:61] "kube-scheduler-pause-056574" [9cf0da5e-1288-4a48-bce8-e02d8754c94c] Running
	I0906 20:34:28.781759  765316 system_pods.go:74] duration metric: took 10.26799ms to wait for pod list to return data ...
	I0906 20:34:28.781793  765316 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:34:28.785373  765316 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0906 20:34:28.785400  765316 node_conditions.go:123] node cpu capacity is 2
	I0906 20:34:28.785414  765316 node_conditions.go:105] duration metric: took 3.596705ms to run NodePressure ...
	I0906 20:34:28.785430  765316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:34:29.038210  765316 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0906 20:34:29.045894  765316 kubeadm.go:787] kubelet initialised
	I0906 20:34:29.045967  765316 kubeadm.go:788] duration metric: took 7.729129ms waiting for restarted kubelet to initialise ...
	I0906 20:34:29.046002  765316 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:34:29.055519  765316 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-5tvwb" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:29.064066  765316 pod_ready.go:92] pod "coredns-5dd5756b68-5tvwb" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:29.064139  765316 pod_ready.go:81] duration metric: took 8.528489ms waiting for pod "coredns-5dd5756b68-5tvwb" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:29.064173  765316 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:29.072130  765316 pod_ready.go:92] pod "etcd-pause-056574" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:29.072197  765316 pod_ready.go:81] duration metric: took 8.003ms waiting for pod "etcd-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:29.072242  765316 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:31.182583  765316 pod_ready.go:102] pod "kube-apiserver-pause-056574" in "kube-system" namespace has status "Ready":"False"
	I0906 20:34:31.683852  765316 pod_ready.go:92] pod "kube-apiserver-pause-056574" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:31.683912  765316 pod_ready.go:81] duration metric: took 2.61151632s waiting for pod "kube-apiserver-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:31.683950  765316 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:33.986279  765316 pod_ready.go:102] pod "kube-controller-manager-pause-056574" in "kube-system" namespace has status "Ready":"False"
	I0906 20:34:35.986616  765316 pod_ready.go:102] pod "kube-controller-manager-pause-056574" in "kube-system" namespace has status "Ready":"False"
	I0906 20:34:36.984707  765316 pod_ready.go:92] pod "kube-controller-manager-pause-056574" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:36.984730  765316 pod_ready.go:81] duration metric: took 5.300753851s waiting for pod "kube-controller-manager-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:36.984741  765316 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mhjb5" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:36.992075  765316 pod_ready.go:92] pod "kube-proxy-mhjb5" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:36.992095  765316 pod_ready.go:81] duration metric: took 7.34737ms waiting for pod "kube-proxy-mhjb5" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:36.992106  765316 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:36.998875  765316 pod_ready.go:92] pod "kube-scheduler-pause-056574" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:36.998951  765316 pod_ready.go:81] duration metric: took 6.836216ms waiting for pod "kube-scheduler-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:36.998978  765316 pod_ready.go:38] duration metric: took 7.952871853s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:34:36.999027  765316 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 20:34:37.011197  765316 ops.go:34] apiserver oom_adj: -16
	I0906 20:34:37.011286  765316 kubeadm.go:640] restartCluster took 47.133617068s
	I0906 20:34:37.011312  765316 kubeadm.go:406] StartCluster complete in 47.489035957s
	I0906 20:34:37.011366  765316 settings.go:142] acquiring lock: {Name:mk0ee322179d939fb926f535c1408b304c5b8b41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:34:37.011473  765316 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17116-652515/kubeconfig
	I0906 20:34:37.012323  765316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/kubeconfig: {Name:mkd5486ff1869e88b8977ac367495417356f4177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:34:37.012655  765316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 20:34:37.013046  765316 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0906 20:34:37.015493  765316 out.go:177] * Enabled addons: 
	I0906 20:34:37.013584  765316 config.go:182] Loaded profile config "pause-056574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 20:34:37.014456  765316 kapi.go:59] client config for pause-056574: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/client.crt", KeyFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/client.key", CAFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x172c280), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 20:34:37.017943  765316 addons.go:502] enable addons completed in 4.894787ms: enabled=[]
	I0906 20:34:37.021942  765316 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-056574" context rescaled to 1 replicas
	I0906 20:34:37.022087  765316 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:34:37.024476  765316 out.go:177] * Verifying Kubernetes components...
	I0906 20:34:37.026589  765316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:34:37.172311  765316 start.go:880] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0906 20:34:37.172406  765316 node_ready.go:35] waiting up to 6m0s for node "pause-056574" to be "Ready" ...
	I0906 20:34:37.176294  765316 node_ready.go:49] node "pause-056574" has status "Ready":"True"
	I0906 20:34:37.176362  765316 node_ready.go:38] duration metric: took 3.892138ms waiting for node "pause-056574" to be "Ready" ...
	I0906 20:34:37.176384  765316 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:34:37.184824  765316 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5tvwb" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:37.575515  765316 pod_ready.go:92] pod "coredns-5dd5756b68-5tvwb" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:37.575540  765316 pod_ready.go:81] duration metric: took 390.640164ms waiting for pod "coredns-5dd5756b68-5tvwb" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:37.575554  765316 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:37.979328  765316 pod_ready.go:92] pod "etcd-pause-056574" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:37.979401  765316 pod_ready.go:81] duration metric: took 403.838144ms waiting for pod "etcd-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:37.979443  765316 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:38.375867  765316 pod_ready.go:92] pod "kube-apiserver-pause-056574" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:38.375943  765316 pod_ready.go:81] duration metric: took 396.457773ms waiting for pod "kube-apiserver-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:38.375971  765316 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:38.775619  765316 pod_ready.go:92] pod "kube-controller-manager-pause-056574" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:38.775689  765316 pod_ready.go:81] duration metric: took 399.696841ms waiting for pod "kube-controller-manager-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:38.775716  765316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mhjb5" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:39.177197  765316 pod_ready.go:92] pod "kube-proxy-mhjb5" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:39.177227  765316 pod_ready.go:81] duration metric: took 401.490151ms waiting for pod "kube-proxy-mhjb5" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:39.177245  765316 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:39.575606  765316 pod_ready.go:92] pod "kube-scheduler-pause-056574" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:39.575627  765316 pod_ready.go:81] duration metric: took 398.373922ms waiting for pod "kube-scheduler-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:39.575636  765316 pod_ready.go:38] duration metric: took 2.39922622s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:34:39.575653  765316 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:34:39.575726  765316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:34:39.590691  765316 api_server.go:72] duration metric: took 2.568547077s to wait for apiserver process to appear ...
	I0906 20:34:39.590711  765316 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:34:39.590728  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:39.601600  765316 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0906 20:34:39.603032  765316 api_server.go:141] control plane version: v1.28.1
	I0906 20:34:39.603103  765316 api_server.go:131] duration metric: took 12.384499ms to wait for apiserver health ...
	I0906 20:34:39.603126  765316 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:34:39.779771  765316 system_pods.go:59] 7 kube-system pods found
	I0906 20:34:39.779863  765316 system_pods.go:61] "coredns-5dd5756b68-5tvwb" [d2358999-88bf-4ed4-b2ca-c2fb70773e36] Running
	I0906 20:34:39.779884  765316 system_pods.go:61] "etcd-pause-056574" [03cf1bfc-ead0-422a-96d5-db71d30b7fa3] Running
	I0906 20:34:39.779922  765316 system_pods.go:61] "kindnet-rw8hd" [e90346fb-20dd-4265-8d3b-8f0a270025ce] Running
	I0906 20:34:39.779947  765316 system_pods.go:61] "kube-apiserver-pause-056574" [f4ad611e-3361-4424-9530-040ba395f734] Running
	I0906 20:34:39.779976  765316 system_pods.go:61] "kube-controller-manager-pause-056574" [ca30e49f-b35a-4884-bedd-3a64973b3e79] Running
	I0906 20:34:39.780012  765316 system_pods.go:61] "kube-proxy-mhjb5" [2f662ac9-4819-4de1-a149-1427c9be35f4] Running
	I0906 20:34:39.780035  765316 system_pods.go:61] "kube-scheduler-pause-056574" [9cf0da5e-1288-4a48-bce8-e02d8754c94c] Running
	I0906 20:34:39.780055  765316 system_pods.go:74] duration metric: took 176.911288ms to wait for pod list to return data ...
	I0906 20:34:39.780090  765316 default_sa.go:34] waiting for default service account to be created ...
	I0906 20:34:39.977219  765316 default_sa.go:45] found service account: "default"
	I0906 20:34:39.977245  765316 default_sa.go:55] duration metric: took 197.126488ms for default service account to be created ...
	I0906 20:34:39.977254  765316 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 20:34:40.182355  765316 system_pods.go:86] 7 kube-system pods found
	I0906 20:34:40.182442  765316 system_pods.go:89] "coredns-5dd5756b68-5tvwb" [d2358999-88bf-4ed4-b2ca-c2fb70773e36] Running
	I0906 20:34:40.182468  765316 system_pods.go:89] "etcd-pause-056574" [03cf1bfc-ead0-422a-96d5-db71d30b7fa3] Running
	I0906 20:34:40.182510  765316 system_pods.go:89] "kindnet-rw8hd" [e90346fb-20dd-4265-8d3b-8f0a270025ce] Running
	I0906 20:34:40.182538  765316 system_pods.go:89] "kube-apiserver-pause-056574" [f4ad611e-3361-4424-9530-040ba395f734] Running
	I0906 20:34:40.182563  765316 system_pods.go:89] "kube-controller-manager-pause-056574" [ca30e49f-b35a-4884-bedd-3a64973b3e79] Running
	I0906 20:34:40.182602  765316 system_pods.go:89] "kube-proxy-mhjb5" [2f662ac9-4819-4de1-a149-1427c9be35f4] Running
	I0906 20:34:40.182631  765316 system_pods.go:89] "kube-scheduler-pause-056574" [9cf0da5e-1288-4a48-bce8-e02d8754c94c] Running
	I0906 20:34:40.182655  765316 system_pods.go:126] duration metric: took 205.395489ms to wait for k8s-apps to be running ...
	I0906 20:34:40.183486  765316 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 20:34:40.183592  765316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:34:40.203727  765316 system_svc.go:56] duration metric: took 20.227639ms WaitForService to wait for kubelet.
	I0906 20:34:40.204707  765316 kubeadm.go:581] duration metric: took 3.182559304s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 20:34:40.206224  765316 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:34:40.375845  765316 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0906 20:34:40.375926  765316 node_conditions.go:123] node cpu capacity is 2
	I0906 20:34:40.375951  765316 node_conditions.go:105] duration metric: took 169.705809ms to run NodePressure ...
	I0906 20:34:40.375990  765316 start.go:228] waiting for startup goroutines ...
	I0906 20:34:40.376013  765316 start.go:233] waiting for cluster config update ...
	I0906 20:34:40.376583  765316 start.go:242] writing updated cluster config ...
	I0906 20:34:40.377592  765316 ssh_runner.go:195] Run: rm -f paused
	I0906 20:34:40.470711  765316 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0906 20:34:40.474433  765316 out.go:177] * Done! kubectl is now configured to use "pause-056574" cluster and "default" namespace by default

                                                
                                                
** /stderr **
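The stderr above is dominated by minikube's apiserver healthz polling loop. The same per-check breakdown can be requested by hand; a minimal sketch, assuming the endpoint and profile name shown in the log (https://192.168.67.2:8443, profile pause-056574) and that curl is available inside the node:

	# Ask the apiserver for its verbose health report from inside the node;
	# ?verbose returns the same [+]/[-] per-check listing printed on the 500 responses above.
	out/minikube-linux-arm64 -p pause-056574 ssh -- curl -sk 'https://192.168.67.2:8443/healthz?verbose'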
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-056574
helpers_test.go:235: (dbg) docker inspect pause-056574:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bb332d83cfeaf7b6f46f8b947a0e17a184842508a616c44663f68d6ee29edddb",
	        "Created": "2023-09-06T20:32:12.431317841Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 757519,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-06T20:32:12.881556944Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c0704b3a4f8b9b9ec71e677be36506d49ffd7d56513ca0bdb5d12d8921195405",
	        "ResolvConfPath": "/var/lib/docker/containers/bb332d83cfeaf7b6f46f8b947a0e17a184842508a616c44663f68d6ee29edddb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bb332d83cfeaf7b6f46f8b947a0e17a184842508a616c44663f68d6ee29edddb/hostname",
	        "HostsPath": "/var/lib/docker/containers/bb332d83cfeaf7b6f46f8b947a0e17a184842508a616c44663f68d6ee29edddb/hosts",
	        "LogPath": "/var/lib/docker/containers/bb332d83cfeaf7b6f46f8b947a0e17a184842508a616c44663f68d6ee29edddb/bb332d83cfeaf7b6f46f8b947a0e17a184842508a616c44663f68d6ee29edddb-json.log",
	        "Name": "/pause-056574",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-056574:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-056574",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9204c146baee94ae117bc8a82fe86f9f386eaeb73a0d4412ae43ca5292a689bd-init/diff:/var/lib/docker/overlay2/ba2e4d17dafea75bb4f24482e38d11907530383cc2bd79f5b12dd92aeb991448/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9204c146baee94ae117bc8a82fe86f9f386eaeb73a0d4412ae43ca5292a689bd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9204c146baee94ae117bc8a82fe86f9f386eaeb73a0d4412ae43ca5292a689bd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9204c146baee94ae117bc8a82fe86f9f386eaeb73a0d4412ae43ca5292a689bd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-056574",
	                "Source": "/var/lib/docker/volumes/pause-056574/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-056574",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-056574",
	                "name.minikube.sigs.k8s.io": "pause-056574",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e160cefc40ea0dcadeb2cf327ee853a88ebaec39440447c0362d0b3a86f2774a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33567"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33566"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33561"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33564"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33563"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e160cefc40ea",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-056574": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "bb332d83cfea",
	                        "pause-056574"
	                    ],
	                    "NetworkID": "e3500bda2ceb336e6887348cf9d9bf6470fa6504795c9cc68203c3575e6664ab",
	                    "EndpointID": "197e3fb51f6aabc61513f445dfe56076e747eaf5a1ef7d12579b70bd539647b1",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
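The inspect dump above can be narrowed to the few fields a post-mortem usually needs; a minimal sketch, assuming only the stock docker CLI and the container name from this report:

	# Print just the container state and the address it holds on the minikube network.
	docker inspect -f '{{.State.Status}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' pause-056574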
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-056574 -n pause-056574
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-056574 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-056574 logs -n 25: (1.751874545s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-980317       | scheduled-stop-980317       | jenkins | v1.31.2 | 06 Sep 23 20:30 UTC |                     |
	|         | --schedule 5m                  |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-980317       | scheduled-stop-980317       | jenkins | v1.31.2 | 06 Sep 23 20:30 UTC |                     |
	|         | --schedule 5m                  |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-980317       | scheduled-stop-980317       | jenkins | v1.31.2 | 06 Sep 23 20:30 UTC |                     |
	|         | --schedule 5m                  |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-980317       | scheduled-stop-980317       | jenkins | v1.31.2 | 06 Sep 23 20:30 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-980317       | scheduled-stop-980317       | jenkins | v1.31.2 | 06 Sep 23 20:30 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-980317       | scheduled-stop-980317       | jenkins | v1.31.2 | 06 Sep 23 20:30 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-980317       | scheduled-stop-980317       | jenkins | v1.31.2 | 06 Sep 23 20:30 UTC | 06 Sep 23 20:30 UTC |
	|         | --cancel-scheduled             |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-980317       | scheduled-stop-980317       | jenkins | v1.31.2 | 06 Sep 23 20:31 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-980317       | scheduled-stop-980317       | jenkins | v1.31.2 | 06 Sep 23 20:31 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-980317       | scheduled-stop-980317       | jenkins | v1.31.2 | 06 Sep 23 20:31 UTC | 06 Sep 23 20:31 UTC |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| delete  | -p scheduled-stop-980317       | scheduled-stop-980317       | jenkins | v1.31.2 | 06 Sep 23 20:31 UTC | 06 Sep 23 20:31 UTC |
	| start   | -p insufficient-storage-500291 | insufficient-storage-500291 | jenkins | v1.31.2 | 06 Sep 23 20:31 UTC |                     |
	|         | --memory=2048 --output=json    |                             |         |         |                     |                     |
	|         | --wait=true --driver=docker    |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p insufficient-storage-500291 | insufficient-storage-500291 | jenkins | v1.31.2 | 06 Sep 23 20:32 UTC | 06 Sep 23 20:32 UTC |
	| start   | -p pause-056574 --memory=2048  | pause-056574                | jenkins | v1.31.2 | 06 Sep 23 20:32 UTC | 06 Sep 23 20:33 UTC |
	|         | --install-addons=false         |                             |         |         |                     |                     |
	|         | --wait=all --driver=docker     |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-063967         | NoKubernetes-063967         | jenkins | v1.31.2 | 06 Sep 23 20:32 UTC |                     |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-063967         | NoKubernetes-063967         | jenkins | v1.31.2 | 06 Sep 23 20:32 UTC | 06 Sep 23 20:32 UTC |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-063967         | NoKubernetes-063967         | jenkins | v1.31.2 | 06 Sep 23 20:32 UTC | 06 Sep 23 20:33 UTC |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p NoKubernetes-063967         | NoKubernetes-063967         | jenkins | v1.31.2 | 06 Sep 23 20:33 UTC | 06 Sep 23 20:33 UTC |
	| start   | -p NoKubernetes-063967         | NoKubernetes-063967         | jenkins | v1.31.2 | 06 Sep 23 20:33 UTC | 06 Sep 23 20:33 UTC |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| ssh     | -p NoKubernetes-063967 sudo    | NoKubernetes-063967         | jenkins | v1.31.2 | 06 Sep 23 20:33 UTC |                     |
	|         | systemctl is-active --quiet    |                             |         |         |                     |                     |
	|         | service kubelet                |                             |         |         |                     |                     |
	| stop    | -p NoKubernetes-063967         | NoKubernetes-063967         | jenkins | v1.31.2 | 06 Sep 23 20:33 UTC | 06 Sep 23 20:33 UTC |
	| start   | -p NoKubernetes-063967         | NoKubernetes-063967         | jenkins | v1.31.2 | 06 Sep 23 20:33 UTC | 06 Sep 23 20:33 UTC |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| ssh     | -p NoKubernetes-063967 sudo    | NoKubernetes-063967         | jenkins | v1.31.2 | 06 Sep 23 20:33 UTC |                     |
	|         | systemctl is-active --quiet    |                             |         |         |                     |                     |
	|         | service kubelet                |                             |         |         |                     |                     |
	| delete  | -p NoKubernetes-063967         | NoKubernetes-063967         | jenkins | v1.31.2 | 06 Sep 23 20:33 UTC | 06 Sep 23 20:33 UTC |
	| start   | -p pause-056574                | pause-056574                | jenkins | v1.31.2 | 06 Sep 23 20:33 UTC | 06 Sep 23 20:34 UTC |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 20:33:34
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 20:33:34.313466  765316 out.go:296] Setting OutFile to fd 1 ...
	I0906 20:33:34.313689  765316 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:33:34.313701  765316 out.go:309] Setting ErrFile to fd 2...
	I0906 20:33:34.313707  765316 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:33:34.314132  765316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17116-652515/.minikube/bin
	I0906 20:33:34.314582  765316 out.go:303] Setting JSON to false
	I0906 20:33:34.315720  765316 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11569,"bootTime":1694020846,"procs":374,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0906 20:33:34.315790  765316 start.go:138] virtualization:  
	I0906 20:33:34.319340  765316 out.go:177] * [pause-056574] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0906 20:33:34.327361  765316 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 20:33:34.330841  765316 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 20:33:34.327572  765316 notify.go:220] Checking for updates...
	I0906 20:33:34.335983  765316 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig
	I0906 20:33:34.338698  765316 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube
	I0906 20:33:34.340598  765316 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0906 20:33:34.343246  765316 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 20:33:34.345753  765316 config.go:182] Loaded profile config "pause-056574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 20:33:34.346369  765316 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 20:33:34.375935  765316 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0906 20:33:34.376035  765316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 20:33:34.563800  765316 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-09-06 20:33:34.549453742 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0906 20:33:34.563907  765316 docker.go:294] overlay module found
	I0906 20:33:34.567373  765316 out.go:177] * Using the docker driver based on existing profile
	I0906 20:33:34.569301  765316 start.go:298] selected driver: docker
	I0906 20:33:34.569317  765316 start.go:902] validating driver "docker" against &{Name:pause-056574 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-056574 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 20:33:34.569447  765316 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 20:33:34.569563  765316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 20:33:34.689940  765316 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-09-06 20:33:34.677145194 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0906 20:33:34.690392  765316 cni.go:84] Creating CNI manager for ""
	I0906 20:33:34.690403  765316 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0906 20:33:34.690414  765316 start_flags.go:321] config:
	{Name:pause-056574 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-056574 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 20:33:34.693526  765316 out.go:177] * Starting control plane node pause-056574 in cluster pause-056574
	I0906 20:33:34.695389  765316 cache.go:122] Beginning downloading kic base image for docker with crio
	I0906 20:33:34.697433  765316 out.go:177] * Pulling base image ...
	I0906 20:33:34.699855  765316 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0906 20:33:34.700170  765316 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4
	I0906 20:33:34.700208  765316 cache.go:57] Caching tarball of preloaded images
	I0906 20:33:34.700304  765316 preload.go:174] Found /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0906 20:33:34.700314  765316 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0906 20:33:34.700426  765316 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon
	I0906 20:33:34.700814  765316 profile.go:148] Saving config to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/config.json ...
	I0906 20:33:34.732284  765316 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon, skipping pull
	I0906 20:33:34.732306  765316 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad exists in daemon, skipping load
	I0906 20:33:34.732324  765316 cache.go:195] Successfully downloaded all kic artifacts
	I0906 20:33:34.732372  765316 start.go:365] acquiring machines lock for pause-056574: {Name:mk90a09ef8a87298b0c7a90b2424c10110e9aa4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:33:34.732448  765316 start.go:369] acquired machines lock for "pause-056574" in 50.027µs
	I0906 20:33:34.732479  765316 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:33:34.732489  765316 fix.go:54] fixHost starting: 
	I0906 20:33:34.732759  765316 cli_runner.go:164] Run: docker container inspect pause-056574 --format={{.State.Status}}
	I0906 20:33:34.750767  765316 fix.go:102] recreateIfNeeded on pause-056574: state=Running err=<nil>
	W0906 20:33:34.750796  765316 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 20:33:34.752565  765316 out.go:177] * Updating the running docker "pause-056574" container ...
	I0906 20:33:34.754518  765316 machine.go:88] provisioning docker machine ...
	I0906 20:33:34.754566  765316 ubuntu.go:169] provisioning hostname "pause-056574"
	I0906 20:33:34.754648  765316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-056574
	I0906 20:33:34.773035  765316 main.go:141] libmachine: Using SSH client type: native
	I0906 20:33:34.773516  765316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33567 <nil> <nil>}
	I0906 20:33:34.773535  765316 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-056574 && echo "pause-056574" | sudo tee /etc/hostname
	I0906 20:33:34.929932  765316 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-056574
	
	I0906 20:33:34.930014  765316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-056574
	I0906 20:33:34.953757  765316 main.go:141] libmachine: Using SSH client type: native
	I0906 20:33:34.954237  765316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33567 <nil> <nil>}
	I0906 20:33:34.954262  765316 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-056574' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-056574/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-056574' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:33:35.103845  765316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:33:35.103880  765316 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17116-652515/.minikube CaCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17116-652515/.minikube}
	I0906 20:33:35.103902  765316 ubuntu.go:177] setting up certificates
	I0906 20:33:35.103931  765316 provision.go:83] configureAuth start
	I0906 20:33:35.104005  765316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-056574
	I0906 20:33:35.125515  765316 provision.go:138] copyHostCerts
	I0906 20:33:35.125587  765316 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem, removing ...
	I0906 20:33:35.125600  765316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem
	I0906 20:33:35.125675  765316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem (1082 bytes)
	I0906 20:33:35.125782  765316 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem, removing ...
	I0906 20:33:35.125792  765316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem
	I0906 20:33:35.125821  765316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem (1123 bytes)
	I0906 20:33:35.125893  765316 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem, removing ...
	I0906 20:33:35.125901  765316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem
	I0906 20:33:35.125930  765316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem (1679 bytes)
	I0906 20:33:35.125987  765316 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem org=jenkins.pause-056574 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube pause-056574]
	I0906 20:33:35.464852  765316 provision.go:172] copyRemoteCerts
	I0906 20:33:35.464921  765316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:33:35.464976  765316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-056574
	I0906 20:33:35.485046  765316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33567 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/pause-056574/id_rsa Username:docker}
	I0906 20:33:35.588536  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 20:33:35.625299  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0906 20:33:35.663087  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:33:35.701708  765316 provision.go:86] duration metric: configureAuth took 597.759466ms
	I0906 20:33:35.701734  765316 ubuntu.go:193] setting minikube options for container-runtime
	I0906 20:33:35.701979  765316 config.go:182] Loaded profile config "pause-056574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 20:33:35.702146  765316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-056574
	I0906 20:33:35.741036  765316 main.go:141] libmachine: Using SSH client type: native
	I0906 20:33:35.742277  765316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33567 <nil> <nil>}
	I0906 20:33:35.742314  765316 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:33:41.370326  765316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:33:41.370355  765316 machine.go:91] provisioned docker machine in 6.615822493s
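The crio restart between 20:33:35 and 20:33:41 follows the write of /etc/sysconfig/crio.minikube above; the CRIO_MINIKUBE_OPTIONS value is presumably sourced by the crio systemd unit as an environment file so the service CIDR 10.96.0.0/12 is treated as an insecure registry. A quick way to check that assumption on the node (these commands are not part of the log):
	cat /etc/sysconfig/crio.minikube                     # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl cat crio | grep -n CRIO_MINIKUBE_OPTIONS   # assumption: the unit sources this file and expands the variable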
	I0906 20:33:41.370371  765316 start.go:300] post-start starting for "pause-056574" (driver="docker")
	I0906 20:33:41.370382  765316 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:33:41.370454  765316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:33:41.370833  765316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-056574
	I0906 20:33:41.399017  765316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33567 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/pause-056574/id_rsa Username:docker}
	I0906 20:33:41.505737  765316 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:33:41.513548  765316 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 20:33:41.513586  765316 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 20:33:41.513598  765316 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 20:33:41.513605  765316 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0906 20:33:41.513616  765316 filesync.go:126] Scanning /home/jenkins/minikube-integration/17116-652515/.minikube/addons for local assets ...
	I0906 20:33:41.513686  765316 filesync.go:126] Scanning /home/jenkins/minikube-integration/17116-652515/.minikube/files for local assets ...
	I0906 20:33:41.513786  765316 filesync.go:149] local asset: /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem -> 6579002.pem in /etc/ssl/certs
	I0906 20:33:41.513902  765316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:33:41.525815  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem --> /etc/ssl/certs/6579002.pem (1708 bytes)
	I0906 20:33:41.559398  765316 start.go:303] post-start completed in 189.009691ms
	I0906 20:33:41.559509  765316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 20:33:41.559561  765316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-056574
	I0906 20:33:41.578582  765316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33567 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/pause-056574/id_rsa Username:docker}
	I0906 20:33:41.672501  765316 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 20:33:41.679314  765316 fix.go:56] fixHost completed within 6.946815867s
	I0906 20:33:41.679340  765316 start.go:83] releasing machines lock for "pause-056574", held for 6.94688081s
	I0906 20:33:41.679452  765316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-056574
	I0906 20:33:41.700000  765316 ssh_runner.go:195] Run: cat /version.json
	I0906 20:33:41.700072  765316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-056574
	I0906 20:33:41.700343  765316 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:33:41.700410  765316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-056574
	I0906 20:33:41.720050  765316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33567 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/pause-056574/id_rsa Username:docker}
	I0906 20:33:41.730256  765316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33567 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/pause-056574/id_rsa Username:docker}
	I0906 20:33:41.814929  765316 ssh_runner.go:195] Run: systemctl --version
	I0906 20:33:41.962994  765316 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:33:42.130540  765316 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 20:33:42.138170  765316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:33:42.151518  765316 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0906 20:33:42.151632  765316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:33:42.171316  765316 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0906 20:33:42.171343  765316 start.go:466] detecting cgroup driver to use...
	I0906 20:33:42.171384  765316 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0906 20:33:42.171440  765316 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:33:42.189434  765316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:33:42.207051  765316 docker.go:196] disabling cri-docker service (if available) ...
	I0906 20:33:42.207121  765316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:33:42.229017  765316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:33:42.247146  765316 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:33:42.390753  765316 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:33:42.517835  765316 docker.go:212] disabling docker service ...
	I0906 20:33:42.517907  765316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:33:42.533863  765316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:33:42.547730  765316 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:33:42.677944  765316 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:33:42.808903  765316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
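Because this profile runs the crio container runtime, the cri-docker and docker units are stopped, disabled, and masked first (the systemctl calls above). A compact shell equivalent, assuming all four units exist on the node:
	for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
	  sudo systemctl stop -f "$unit" || true
	done
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service
	sudo systemctl is-active --quiet docker && echo "docker still active" || echo "docker disabled"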
	I0906 20:33:42.822945  765316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:33:42.844218  765316 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0906 20:33:42.844284  765316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:33:42.861002  765316 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:33:42.861102  765316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:33:42.873532  765316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:33:42.885916  765316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:33:42.898763  765316 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:33:42.910947  765316 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:33:42.924563  765316 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:33:42.936784  765316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:33:43.449022  765316 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:33:46.252020  765316 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.802957999s)
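The configuration steps logged between 20:33:42 and 20:33:46 boil down to pointing crictl at the CRI-O socket and adjusting CRI-O's drop-in config before restarting it. Condensed into one script, with paths and values copied from the log above:
	printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio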
	I0906 20:33:46.252075  765316 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:33:46.252129  765316 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:33:46.270303  765316 start.go:534] Will wait 60s for crictl version
	I0906 20:33:46.270367  765316 ssh_runner.go:195] Run: which crictl
	I0906 20:33:46.287200  765316 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:33:46.390868  765316 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0906 20:33:46.390954  765316 ssh_runner.go:195] Run: crio --version
	I0906 20:33:46.503270  765316 ssh_runner.go:195] Run: crio --version
	I0906 20:33:46.585478  765316 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0906 20:33:46.587551  765316 cli_runner.go:164] Run: docker network inspect pause-056574 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0906 20:33:46.612590  765316 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0906 20:33:46.621996  765316 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0906 20:33:46.622086  765316 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:33:46.679361  765316 crio.go:496] all images are preloaded for cri-o runtime.
	I0906 20:33:46.679387  765316 crio.go:415] Images already preloaded, skipping extraction
	I0906 20:33:46.679443  765316 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:33:46.750400  765316 crio.go:496] all images are preloaded for cri-o runtime.
	I0906 20:33:46.750425  765316 cache_images.go:84] Images are preloaded, skipping loading
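The two "sudo crictl images --output json" calls above are how the preload check decides whether the lz4 tarball needs to be extracted: when every image required for Kubernetes v1.28.1 is already in CRI-O's store, loading is skipped. A rough equivalent check from a shell (jq is an assumption here, not something the log shows):
	sudo crictl images --output json \
	  | jq -r '.images[].repoTags[]' \
	  | grep -qx 'registry.k8s.io/kube-apiserver:v1.28.1' \
	  && echo "preloaded" || echo "needs extraction"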
	I0906 20:33:46.750502  765316 ssh_runner.go:195] Run: crio config
	I0906 20:33:46.830543  765316 cni.go:84] Creating CNI manager for ""
	I0906 20:33:46.830568  765316 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0906 20:33:46.830591  765316 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 20:33:46.830614  765316 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-056574 NodeName:pause-056574 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 20:33:46.830767  765316 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-056574"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 20:33:46.830857  765316 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-056574 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:pause-056574 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
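The kubeadm config and kubelet drop-in printed above are what get staged onto the node in the next few steps. A config assembled this way can be sanity-checked with kubeadm's dry-run mode; the path below is an assumption, since the log only shows kubeadm.yaml.new being staged under /var/tmp/minikube:
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run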
	I0906 20:33:46.830927  765316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0906 20:33:46.843205  765316 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:33:46.843294  765316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:33:46.854431  765316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I0906 20:33:46.896420  765316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:33:46.934716  765316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I0906 20:33:46.960797  765316 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0906 20:33:46.966577  765316 certs.go:56] Setting up /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574 for IP: 192.168.67.2
	I0906 20:33:46.966617  765316 certs.go:190] acquiring lock for shared ca certs: {Name:mk5596cf7beb26b5b83b50e551aa70cf266830a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:33:46.966754  765316 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.key
	I0906 20:33:46.966796  765316 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.key
	I0906 20:33:46.966880  765316 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/client.key
	I0906 20:33:46.966941  765316 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/apiserver.key.c7fa3a9e
	I0906 20:33:46.966982  765316 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/proxy-client.key
	I0906 20:33:46.967090  765316 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/657900.pem (1338 bytes)
	W0906 20:33:46.967119  765316 certs.go:433] ignoring /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/657900_empty.pem, impossibly tiny 0 bytes
	I0906 20:33:46.967129  765316 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:33:46.967153  765316 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem (1082 bytes)
	I0906 20:33:46.967182  765316 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:33:46.967205  765316 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem (1679 bytes)
	I0906 20:33:46.967253  765316 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem (1708 bytes)
	I0906 20:33:46.967848  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 20:33:47.005937  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 20:33:47.051356  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:33:47.110191  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 20:33:47.555983  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:33:47.747732  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 20:33:47.961923  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:33:48.072519  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:33:48.289773  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem --> /usr/share/ca-certificates/6579002.pem (1708 bytes)
	I0906 20:33:48.404448  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:33:48.532333  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/certs/657900.pem --> /usr/share/ca-certificates/657900.pem (1338 bytes)
	I0906 20:33:48.682016  765316 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:33:48.774206  765316 ssh_runner.go:195] Run: openssl version
	I0906 20:33:48.810709  765316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:33:48.870390  765316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:33:48.901746  765316 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:33:48.901812  765316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:33:48.939984  765316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:33:48.994584  765316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/657900.pem && ln -fs /usr/share/ca-certificates/657900.pem /etc/ssl/certs/657900.pem"
	I0906 20:33:49.046618  765316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/657900.pem
	I0906 20:33:49.061504  765316 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 20:04 /usr/share/ca-certificates/657900.pem
	I0906 20:33:49.061623  765316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/657900.pem
	I0906 20:33:49.087874  765316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/657900.pem /etc/ssl/certs/51391683.0"
	I0906 20:33:49.139712  765316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6579002.pem && ln -fs /usr/share/ca-certificates/6579002.pem /etc/ssl/certs/6579002.pem"
	I0906 20:33:49.183818  765316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6579002.pem
	I0906 20:33:49.214747  765316 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 20:04 /usr/share/ca-certificates/6579002.pem
	I0906 20:33:49.214885  765316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6579002.pem
	I0906 20:33:49.248302  765316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6579002.pem /etc/ssl/certs/3ec20f2e.0"
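The three test/ls/openssl/ln sequences above install each CA bundle under its OpenSSL subject-hash name so that TLS verification on the node can find it. Condensed into one loop, with the file names copied from the log:
	for pem in minikubeCA.pem 657900.pem 6579002.pem; do
	  sudo ln -fs "/usr/share/ca-certificates/$pem" "/etc/ssl/certs/$pem"
	  hash=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/$pem")
	  sudo ln -fs "/etc/ssl/certs/$pem" "/etc/ssl/certs/$hash.0"
	done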
	I0906 20:33:49.283927  765316 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0906 20:33:49.306853  765316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:33:49.342866  765316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:33:49.387558  765316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:33:49.420638  765316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:33:49.453227  765316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:33:49.486312  765316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
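Each "-checkend 86400" call above exits non-zero if the certificate expires within the next 24 hours, which is how the existing control-plane certs are judged reusable before StartCluster. For example:
	if sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	  echo "cert is valid for at least another day; reuse it"
	else
	  echo "cert expires within 24h; regenerate"
	fi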
	I0906 20:33:49.522285  765316 kubeadm.go:404] StartCluster: {Name:pause-056574 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-056574 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServ
erIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-p
rovisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 20:33:49.522473  765316 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:33:49.522565  765316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:33:49.708977  765316 cri.go:89] found id: "025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59"
	I0906 20:33:49.709052  765316 cri.go:89] found id: "34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087"
	I0906 20:33:49.709072  765316 cri.go:89] found id: "b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558"
	I0906 20:33:49.709092  765316 cri.go:89] found id: "8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0"
	I0906 20:33:49.709127  765316 cri.go:89] found id: "545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7"
	I0906 20:33:49.709151  765316 cri.go:89] found id: "931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d"
	I0906 20:33:49.709172  765316 cri.go:89] found id: "4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4"
	I0906 20:33:49.709208  765316 cri.go:89] found id: "b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3"
	I0906 20:33:49.709228  765316 cri.go:89] found id: "1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1"
	I0906 20:33:49.709250  765316 cri.go:89] found id: "9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1"
	I0906 20:33:49.709284  765316 cri.go:89] found id: ""
	I0906 20:33:49.709369  765316 ssh_runner.go:195] Run: sudo runc list -f json
	I0906 20:33:49.843047  765316 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59","pid":2643,"status":"running","bundle":"/run/containers/storage/overlay-containers/025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59/userdata","rootfs":"/var/lib/containers/storage/overlay/9a0fdc0e84afe46fa465ef123b99f238cb7ab6df2d72c8365d9f1daf218965d8/merged","created":"2023-09-06T20:33:47.754975222Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"61920a46","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"61920a46\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termina
tionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:33:47.40697841Z","io.kubernetes.cri-o.Image":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.28.1","io.kubernetes.cri-o.ImageRef":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-056574\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c6d2a7cab994123e8583d4411511571e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-056574_c6d2a7cab994123e8583d4411511571e/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attemp
t\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9a0fdc0e84afe46fa465ef123b99f238cb7ab6df2d72c8365d9f1daf218965d8/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-056574_kube-system_c6d2a7cab994123e8583d4411511571e_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/07072b4ff77295ed198bb1290d87689f5197d61e269ef62b0502747a402a5a05/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"07072b4ff77295ed198bb1290d87689f5197d61e269ef62b0502747a402a5a05","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-056574_kube-system_c6d2a7cab994123e8583d4411511571e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c6d2a7cab994123e8583d4411511571e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"c
ontainer_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c6d2a7cab994123e8583d4411511571e/containers/kube-scheduler/08ab8438\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-056574","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c6d2a7cab994123e8583d4411511571e","kubernetes.io/config.hash":"c6d2a7cab994123e8583d4411511571e","kubernetes.io/config.seen":"2023-09-06T20:32:32.356236838Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1/userdata","root
fs":"/var/lib/containers/storage/overlay/0d39b6a7ce71b0b0b4818a99d81020ebbb8fb26ea088a48aec8d6383ba9671ae/merged","created":"2023-09-06T20:32:33.117974564Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"61920a46","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"61920a46\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:32:32.931764147Z","io.kubernetes.cri-o.Imag
e":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.28.1","io.kubernetes.cri-o.ImageRef":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-056574\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c6d2a7cab994123e8583d4411511571e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-056574_c6d2a7cab994123e8583d4411511571e/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0d39b6a7ce71b0b0b4818a99d81020ebbb8fb26ea088a48aec8d6383ba9671ae/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-056574_kube-system_c6d2a7cab994123e8583d4411511571e_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/o
verlay-containers/07072b4ff77295ed198bb1290d87689f5197d61e269ef62b0502747a402a5a05/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"07072b4ff77295ed198bb1290d87689f5197d61e269ef62b0502747a402a5a05","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-056574_kube-system_c6d2a7cab994123e8583d4411511571e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c6d2a7cab994123e8583d4411511571e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c6d2a7cab994123e8583d4411511571e/containers/kube-scheduler/959d7a9b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":tru
e,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-056574","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c6d2a7cab994123e8583d4411511571e","kubernetes.io/config.hash":"c6d2a7cab994123e8583d4411511571e","kubernetes.io/config.seen":"2023-09-06T20:32:32.356236838Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087","pid":2625,"status":"running","bundle":"/run/containers/storage/overlay-containers/34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087/userdata","rootfs":"/var/lib/containers/storage/overlay/78c0240fabaa90c56a94d81e99fd3a2184693274f31def03c2def7e70a5c4e5b/merged","created":"2023-09-06T20:33:47.591827705Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b7243b12","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.res
tartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b7243b12\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:33:47.259333074Z","io.kubernetes.cri-o.Image":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.28.1","io.kubernetes.cri-o.ImageRef":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-
controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-056574\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"16b1e5bd06f3d89b712ef5511a1ff134\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-056574_16b1e5bd06f3d89b712ef5511a1ff134/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/78c0240fabaa90c56a94d81e99fd3a2184693274f31def03c2def7e70a5c4e5b/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-056574_kube-system_16b1e5bd06f3d89b712ef5511a1ff134_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ff36952b952c004bc87e21ab2ad4188764ee0c7ec492bc0b934dbdb303c0aae7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ff36952b952c004bc87e21ab2ad4188764ee0c7ec492bc0b934dbdb303c0aae7","io.kubernetes.cri-o.SandboxNam
e":"k8s_kube-controller-manager-pause-056574_kube-system_16b1e5bd06f3d89b712ef5511a1ff134_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/16b1e5bd06f3d89b712ef5511a1ff134/containers/kube-controller-manager/badcb97d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/16b1e5bd06f3d89b712ef5511a1ff134/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manage
r.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-056574","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"16b1e5bd06f3d89b71
2ef5511a1ff134","kubernetes.io/config.hash":"16b1e5bd06f3d89b712ef5511a1ff134","kubernetes.io/config.seen":"2023-09-06T20:32:32.356235886Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4/userdata","rootfs":"/var/lib/containers/storage/overlay/09e08c1840b9ddc6b7abb7882334429a311dbd153747ace4e1eab0434302f582/merged","created":"2023-09-06T20:33:00.586685495Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"5b6be1","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"5b6be1\",\"io.kubernetes.container.resta
rtCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:33:00.550150556Z","io.kubernetes.cri-o.Image":"b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230511-dc714da8","io.kubernetes.cri-o.ImageRef":"b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-rw8hd\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e90346fb-20dd-4265-8d3b-8f0a270025ce\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-rw8hd_e90346fb-20dd-4265
-8d3b-8f0a270025ce/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/09e08c1840b9ddc6b7abb7882334429a311dbd153747ace4e1eab0434302f582/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-rw8hd_kube-system_e90346fb-20dd-4265-8d3b-8f0a270025ce_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/bb7982c6df4f0bbd6b02cdca8427fba6fe97e6154887c4d548449995a73fca8d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"bb7982c6df4f0bbd6b02cdca8427fba6fe97e6154887c4d548449995a73fca8d","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-rw8hd_kube-system_e90346fb-20dd-4265-8d3b-8f0a270025ce_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"se
linux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e90346fb-20dd-4265-8d3b-8f0a270025ce/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e90346fb-20dd-4265-8d3b-8f0a270025ce/containers/kindnet-cni/e0d6d98f\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/e90346fb-20dd-4265-8d3b-8f0a270025ce/volumes/kubernetes.io~projected/kube-api-access-bvhwl\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-rw8hd","io.kubernetes.pod.name
space":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e90346fb-20dd-4265-8d3b-8f0a270025ce","kubernetes.io/config.seen":"2023-09-06T20:32:58.685594541Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7/userdata","rootfs":"/var/lib/containers/storage/overlay/49de6b079c2a491ab0497adb3974e73fece3417bc7b8451d518a41c4fb9cbcf8/merged","created":"2023-09-06T20:33:31.702279839Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"f0a6b0f8","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.k
ubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"f0a6b0f8\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:33:31.638983323Z","io.kubernetes.cri-o.IP.0":"10
.244.0.2","io.kubernetes.cri-o.Image":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri-o.ImageRef":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5dd5756b68-5tvwb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d2358999-88bf-4ed4-b2ca-c2fb70773e36\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5dd5756b68-5tvwb_d2358999-88bf-4ed4-b2ca-c2fb70773e36/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/49de6b079c2a491ab0497adb3974e73fece3417bc7b8451d518a41c4fb9cbcf8/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5dd5756b68-5tvwb_kube-system_d2358999-88bf-4ed4-b2ca-c2fb70773e36_0","io.kubernetes.cri-o.ResolvPath":"/run/container
s/storage/overlay-containers/d446768dbcd6e7973cdd3f1e55bcfad6d797985bb6b132644d4e2b88258a3eb3/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"d446768dbcd6e7973cdd3f1e55bcfad6d797985bb6b132644d4e2b88258a3eb3","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5dd5756b68-5tvwb_kube-system_d2358999-88bf-4ed4-b2ca-c2fb70773e36_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/d2358999-88bf-4ed4-b2ca-c2fb70773e36/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d2358999-88bf-4ed4-b2ca-c2fb70773e36/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d2358999-88bf-4
ed4-b2ca-c2fb70773e36/containers/coredns/8c896daa\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/d2358999-88bf-4ed4-b2ca-c2fb70773e36/volumes/kubernetes.io~projected/kube-api-access-v6dff\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5dd5756b68-5tvwb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d2358999-88bf-4ed4-b2ca-c2fb70773e36","kubernetes.io/config.seen":"2023-09-06T20:33:31.246022801Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0/userdata","rootfs":"/var/lib/containers/storage/overlay/6373dce176da5954377857
8d6665461d3c2dc0d9933f1dfb468f5a4fd018ac3c/merged","created":"2023-09-06T20:33:43.556515142Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"dda786a5","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"dda786a5\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:33:43.264099732Z","io.kubernetes.cri-o.Image":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io
.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri-o.ImageRef":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-056574\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"37fb3a22f6eccf83d612f100244ce554\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-056574_37fb3a22f6eccf83d612f100244ce554/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6373dce176da59543778578d6665461d3c2dc0d9933f1dfb468f5a4fd018ac3c/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-056574_kube-system_37fb3a22f6eccf83d612f100244ce554_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/21221832e99b3e31cd6beb4d57d454fb31112ee01f4c8c0d66d54a580badde87/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2
1221832e99b3e31cd6beb4d57d454fb31112ee01f4c8c0d66d54a580badde87","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-056574_kube-system_37fb3a22f6eccf83d612f100244ce554_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/37fb3a22f6eccf83d612f100244ce554/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/37fb3a22f6eccf83d612f100244ce554/containers/etcd/cc2c9ec4\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propag
ation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-056574","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"37fb3a22f6eccf83d612f100244ce554","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"37fb3a22f6eccf83d612f100244ce554","kubernetes.io/config.seen":"2023-09-06T20:32:32.356228387Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1/userdata","rootfs":"/var/lib/containers/storage/overlay/0bc647f5fb26b350ddfa19494d30afb617a6eafa5c7da09827a2d89e9447c228/merged","created":"2023-09-06T20:32:33.139041143Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c997f2bc","io.kubernetes.container.name
":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c997f2bc\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:32:32.910534918Z","io.kubernetes.cri-o.Image":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.1","io.kubernetes.cri-o.ImageRef":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","io.kubernetes.cri-o.Labels":"{\"
io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-056574\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9eed4bbee484bdf886f9c44e782aff8a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-056574_9eed4bbee484bdf886f9c44e782aff8a/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0bc647f5fb26b350ddfa19494d30afb617a6eafa5c7da09827a2d89e9447c228/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-056574_kube-system_9eed4bbee484bdf886f9c44e782aff8a_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/dea0c642ad445c73376b0494852befa1f0f7ab3a490a469671e36b6039742ff7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"dea0c642ad445c73376b0494852befa1f0f7ab3a490a469671e36b6039742ff7","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-056574_kub
e-system_9eed4bbee484bdf886f9c44e782aff8a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9eed4bbee484bdf886f9c44e782aff8a/containers/kube-apiserver/68b84355\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9eed4bbee484bdf886f9c44e782aff8a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":tru
e,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-056574","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9eed4bbee484bdf886f9c44e782aff8a","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"9eed4bbee484bdf886f9c44e782aff8a","kubernetes.io/config.seen":"2023-09-06T20:32:32.356234565Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/931
d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d/userdata","rootfs":"/var/lib/containers/storage/overlay/9150ef67f282b8509500346127c1e9d8e62082a39c1f46891b83b65ce6f9f60b/merged","created":"2023-09-06T20:33:00.886032439Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"f7cf1de9","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"f7cf1de9\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.
cri-o.Created":"2023-09-06T20:33:00.832992491Z","io.kubernetes.cri-o.Image":"812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.28.1","io.kubernetes.cri-o.ImageRef":"812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-mhjb5\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2f662ac9-4819-4de1-a149-1427c9be35f4\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-mhjb5_2f662ac9-4819-4de1-a149-1427c9be35f4/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9150ef67f282b8509500346127c1e9d8e62082a39c1f46891b83b65ce6f9f60b/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-mhjb5_kube-system_2f662ac9-4819-4de1-a149-1427c9be35f4_0","io.kubernetes.cri-o.Resolv
Path":"/run/containers/storage/overlay-containers/dc2a0c975464dc25e7bfefc575d08a0a3618933283327721de1d249ce091b30f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"dc2a0c975464dc25e7bfefc575d08a0a3618933283327721de1d249ce091b30f","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-mhjb5_kube-system_2f662ac9-4819-4de1-a149-1427c9be35f4_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2f662ac9-4819-4de1-a149-1427c9be35f4/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/terminat
ion-log\",\"host_path\":\"/var/lib/kubelet/pods/2f662ac9-4819-4de1-a149-1427c9be35f4/containers/kube-proxy/0cfce1a2\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/2f662ac9-4819-4de1-a149-1427c9be35f4/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/2f662ac9-4819-4de1-a149-1427c9be35f4/volumes/kubernetes.io~projected/kube-api-access-l7wqp\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-mhjb5","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2f662ac9-4819-4de1-a149-1427c9be35f4","kubernetes.io/config.seen":"2023-09-06T20:32:58.684519695Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b0aa
8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3/userdata","rootfs":"/var/lib/containers/storage/overlay/617854d4035a3adccb6a613fdb235f483e73c817d8bc69ce8d9864bba04b8f05/merged","created":"2023-09-06T20:32:33.124641754Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b7243b12","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b7243b12\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.ku
bernetes.cri-o.ContainerID":"b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:32:32.953702159Z","io.kubernetes.cri-o.Image":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.28.1","io.kubernetes.cri-o.ImageRef":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-056574\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"16b1e5bd06f3d89b712ef5511a1ff134\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-056574_16b1e5bd06f3d89b712ef5511a1ff134/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/l
ib/containers/storage/overlay/617854d4035a3adccb6a613fdb235f483e73c817d8bc69ce8d9864bba04b8f05/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-056574_kube-system_16b1e5bd06f3d89b712ef5511a1ff134_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ff36952b952c004bc87e21ab2ad4188764ee0c7ec492bc0b934dbdb303c0aae7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ff36952b952c004bc87e21ab2ad4188764ee0c7ec492bc0b934dbdb303c0aae7","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-056574_kube-system_16b1e5bd06f3d89b712ef5511a1ff134_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\
"/var/lib/kubelet/pods/16b1e5bd06f3d89b712ef5511a1ff134/containers/kube-controller-manager/ac34c3ca\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/16b1e5bd06f3d89b712ef5511a1ff134/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-ce
rtificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-056574","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"16b1e5bd06f3d89b712ef5511a1ff134","kubernetes.io/config.hash":"16b1e5bd06f3d89b712ef5511a1ff134","kubernetes.io/config.seen":"2023-09-06T20:32:32.356235886Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558","pid":2608,"status":"running","bundle":"/run/containers/storage/overlay-containers/b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558/userdata","rootfs":"/var/lib/containers/storag
e/overlay/387ce735afb15105697157ef2c46f8ecb72840ff5d206382be8c6f32b6b7b959/merged","created":"2023-09-06T20:33:47.714197032Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c997f2bc","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c997f2bc\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:33:47.251091707Z","io.kubernetes.cri-o.Image":"b29fb62480892633ac479243b98
41b88f9ae30865773fd76b97522541cd5633a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.1","io.kubernetes.cri-o.ImageRef":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-056574\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9eed4bbee484bdf886f9c44e782aff8a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-056574_9eed4bbee484bdf886f9c44e782aff8a/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/387ce735afb15105697157ef2c46f8ecb72840ff5d206382be8c6f32b6b7b959/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-056574_kube-system_9eed4bbee484bdf886f9c44e782aff8a_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers
/dea0c642ad445c73376b0494852befa1f0f7ab3a490a469671e36b6039742ff7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"dea0c642ad445c73376b0494852befa1f0f7ab3a490a469671e36b6039742ff7","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-056574_kube-system_9eed4bbee484bdf886f9c44e782aff8a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9eed4bbee484bdf886f9c44e782aff8a/containers/kube-apiserver/ddbd8f57\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9eed4bbee484bdf886f9c44e782aff8a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel
\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-056574","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9eed4bbee484bdf886f9c44e782aff8a","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"9eed4bbee484bdf886f9c44e782aff8a","kubernetes.io/config.seen":"2023-09-06
T20:32:32.356234565Z","kubernetes.io/config.source":"file"},"owner":"root"}]
	I0906 20:33:49.843906  765316 cri.go:126] list returned 10 containers
	I0906 20:33:49.843958  765316 cri.go:129] container: {ID:025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59 Status:running}
	I0906 20:33:49.843993  765316 cri.go:135] skipping {025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59 running}: state = "running", want "paused"
	I0906 20:33:49.844030  765316 cri.go:129] container: {ID:1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1 Status:stopped}
	I0906 20:33:49.844057  765316 cri.go:135] skipping {1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1 stopped}: state = "stopped", want "paused"
	I0906 20:33:49.844079  765316 cri.go:129] container: {ID:34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087 Status:running}
	I0906 20:33:49.844113  765316 cri.go:135] skipping {34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087 running}: state = "running", want "paused"
	I0906 20:33:49.844137  765316 cri.go:129] container: {ID:4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4 Status:stopped}
	I0906 20:33:49.844159  765316 cri.go:135] skipping {4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4 stopped}: state = "stopped", want "paused"
	I0906 20:33:49.844198  765316 cri.go:129] container: {ID:545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7 Status:stopped}
	I0906 20:33:49.844223  765316 cri.go:135] skipping {545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7 stopped}: state = "stopped", want "paused"
	I0906 20:33:49.844246  765316 cri.go:129] container: {ID:8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0 Status:stopped}
	I0906 20:33:49.844279  765316 cri.go:135] skipping {8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0 stopped}: state = "stopped", want "paused"
	I0906 20:33:49.844307  765316 cri.go:129] container: {ID:9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1 Status:stopped}
	I0906 20:33:49.844329  765316 cri.go:135] skipping {9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1 stopped}: state = "stopped", want "paused"
	I0906 20:33:49.844364  765316 cri.go:129] container: {ID:931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d Status:stopped}
	I0906 20:33:49.844390  765316 cri.go:135] skipping {931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d stopped}: state = "stopped", want "paused"
	I0906 20:33:49.844412  765316 cri.go:129] container: {ID:b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3 Status:stopped}
	I0906 20:33:49.844446  765316 cri.go:135] skipping {b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3 stopped}: state = "stopped", want "paused"
	I0906 20:33:49.844470  765316 cri.go:129] container: {ID:b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558 Status:running}
	I0906 20:33:49.844491  765316 cri.go:135] skipping {b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558 running}: state = "running", want "paused"
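
The cri.go:129/135 lines above are minikube's unpause path enumerating the listed CRI containers and skipping every one whose state is not "paused"; since nothing here is paused, there is nothing to unpause and the flow falls through to the cluster-restart check that follows. A minimal sketch of that state filter, with names that are illustrative rather than minikube's own:

package main

import "fmt"

type container struct {
	ID     string
	Status string
}

// filterByState keeps only the containers whose status matches want,
// logging a "skipping" line for everything else (mirroring cri.go:135).
func filterByState(all []container, want string) []container {
	var matched []container
	for _, c := range all {
		if c.Status != want {
			fmt.Printf("skipping %s: state = %q, want %q\n", c.ID, c.Status, want)
			continue
		}
		matched = append(matched, c)
	}
	return matched
}

func main() {
	cs := []container{
		{ID: "example-running", Status: "running"},
		{ID: "example-stopped", Status: "stopped"},
	}
	fmt.Println(filterByState(cs, "paused")) // empty: nothing is paused, so nothing needs unpausing
}
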
	I0906 20:33:49.844580  765316 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:33:49.877572  765316 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0906 20:33:49.877650  765316 kubeadm.go:636] restartCluster start
	I0906 20:33:49.877747  765316 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:33:49.891923  765316 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:33:49.892613  765316 kubeconfig.go:92] found "pause-056574" server: "https://192.168.67.2:8443"
	I0906 20:33:49.894755  765316 kapi.go:59] client config for pause-056574: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/client.crt", KeyFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/client.key", CAFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x172c280), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 20:33:49.895932  765316 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:33:49.938872  765316 api_server.go:166] Checking apiserver status ...
	I0906 20:33:49.938986  765316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:33:49.978295  765316 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2608/cgroup
	I0906 20:33:50.021129  765316 api_server.go:182] apiserver freezer: "8:freezer:/docker/bb332d83cfeaf7b6f46f8b947a0e17a184842508a616c44663f68d6ee29edddb/crio/crio-b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558"
	I0906 20:33:50.021297  765316 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bb332d83cfeaf7b6f46f8b947a0e17a184842508a616c44663f68d6ee29edddb/crio/crio-b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558/freezer.state
	I0906 20:33:50.046744  765316 api_server.go:204] freezer state: "THAWED"
	I0906 20:33:50.046776  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:33:55.047189  765316 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 20:33:55.047240  765316 retry.go:31] will retry after 310.26661ms: state is "Stopped"
	I0906 20:33:55.357625  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:00.358516  765316 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 20:34:00.358575  765316 retry.go:31] will retry after 290.077348ms: state is "Stopped"
	I0906 20:34:00.648909  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:05.649272  765316 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 20:34:05.649318  765316 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
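
The "needs reconfigure" decision at kubeadm.go:611 follows from the two checks above: pgrep finds a kube-apiserver process (PID 2608) and its freezer cgroup reports "THAWED", yet /healthz keeps timing out, so the cluster is treated as broken rather than merely paused. A hedged sketch of that freezer lookup, reading the cgroup files directly instead of going through minikube's SSH runner (an assumption for illustration):

package main

import (
	"fmt"
	"os"
	"strings"
)

// freezerState resolves a PID's cgroup-v1 freezer path (the "8:freezer:/docker/…"
// line above) and reads freezer.state, so a deliberately frozen apiserver is not
// mistaken for a crashed one.
func freezerState(pid int) (string, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(data), "\n") {
		parts := strings.SplitN(line, ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
			if err != nil {
				return "", err
			}
			return strings.TrimSpace(string(state)), nil // e.g. "THAWED" as logged above
		}
	}
	return "", fmt.Errorf("no freezer cgroup for pid %d", pid)
}

func main() {
	fmt.Println(freezerState(2608)) // PID of the running kube-apiserver container above
}
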
	I0906 20:34:05.649327  765316 kubeadm.go:1128] stopping kube-system containers ...
	I0906 20:34:05.649336  765316 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:34:05.649402  765316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:34:05.713346  765316 cri.go:89] found id: "05f54a6d8be033bd7c29148b0df899659832d6baf55266ef5cd91ae6387cf6e1"
	I0906 20:34:05.713365  765316 cri.go:89] found id: "bb742a60f04ade79d3b6d8e52d3f63ca2c821b205aceb0ec66cc5f31197be6bc"
	I0906 20:34:05.713371  765316 cri.go:89] found id: "e2eb1c64ed3cdfabc1a99498e56f978b1d13387b663c261485559c5bf1f864e8"
	I0906 20:34:05.713375  765316 cri.go:89] found id: "025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59"
	I0906 20:34:05.713379  765316 cri.go:89] found id: "34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087"
	I0906 20:34:05.713384  765316 cri.go:89] found id: "b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558"
	I0906 20:34:05.713388  765316 cri.go:89] found id: "8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0"
	I0906 20:34:05.713393  765316 cri.go:89] found id: "545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7"
	I0906 20:34:05.713397  765316 cri.go:89] found id: "931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d"
	I0906 20:34:05.713404  765316 cri.go:89] found id: "4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4"
	I0906 20:34:05.713408  765316 cri.go:89] found id: "b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3"
	I0906 20:34:05.713412  765316 cri.go:89] found id: "1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1"
	I0906 20:34:05.713416  765316 cri.go:89] found id: "9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1"
	I0906 20:34:05.713420  765316 cri.go:89] found id: ""
	I0906 20:34:05.713425  765316 cri.go:234] Stopping containers: [05f54a6d8be033bd7c29148b0df899659832d6baf55266ef5cd91ae6387cf6e1 bb742a60f04ade79d3b6d8e52d3f63ca2c821b205aceb0ec66cc5f31197be6bc e2eb1c64ed3cdfabc1a99498e56f978b1d13387b663c261485559c5bf1f864e8 025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59 34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087 b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558 8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0 545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7 931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d 4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4 b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3 1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1 9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1]
	I0906 20:34:05.713489  765316 ssh_runner.go:195] Run: which crictl
	I0906 20:34:05.718682  765316 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 05f54a6d8be033bd7c29148b0df899659832d6baf55266ef5cd91ae6387cf6e1 bb742a60f04ade79d3b6d8e52d3f63ca2c821b205aceb0ec66cc5f31197be6bc e2eb1c64ed3cdfabc1a99498e56f978b1d13387b663c261485559c5bf1f864e8 025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59 34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087 b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558 8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0 545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7 931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d 4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4 b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3 1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1 9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1
	I0906 20:34:13.132443  765316 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 05f54a6d8be033bd7c29148b0df899659832d6baf55266ef5cd91ae6387cf6e1 bb742a60f04ade79d3b6d8e52d3f63ca2c821b205aceb0ec66cc5f31197be6bc e2eb1c64ed3cdfabc1a99498e56f978b1d13387b663c261485559c5bf1f864e8 025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59 34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087 b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558 8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0 545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7 931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d 4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4 b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3 1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1 9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1: (7.413722746s)
	W0906 20:34:13.132505  765316 kubeadm.go:689] Failed to stop kube-system containers: port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 05f54a6d8be033bd7c29148b0df899659832d6baf55266ef5cd91ae6387cf6e1 bb742a60f04ade79d3b6d8e52d3f63ca2c821b205aceb0ec66cc5f31197be6bc e2eb1c64ed3cdfabc1a99498e56f978b1d13387b663c261485559c5bf1f864e8 025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59 34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087 b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558 8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0 545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7 931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d 4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4 b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3 1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1 9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1: Proce
ss exited with status 1
	stdout:
	05f54a6d8be033bd7c29148b0df899659832d6baf55266ef5cd91ae6387cf6e1
	bb742a60f04ade79d3b6d8e52d3f63ca2c821b205aceb0ec66cc5f31197be6bc
	e2eb1c64ed3cdfabc1a99498e56f978b1d13387b663c261485559c5bf1f864e8
	025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59
	34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087
	b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558
	8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0
	
	stderr:
	E0906 20:34:13.129283    2966 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7\": container with ID starting with 545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7 not found: ID does not exist" containerID="545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7"
	time="2023-09-06T20:34:13Z" level=fatal msg="stopping the container \"545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7\": rpc error: code = NotFound desc = could not find container \"545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7\": container with ID starting with 545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7 not found: ID does not exist"
	I0906 20:34:13.132576  765316 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:34:13.237172  765316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:34:13.248899  765316 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Sep  6 20:32 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Sep  6 20:32 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Sep  6 20:32 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep  6 20:32 /etc/kubernetes/scheduler.conf
	
	I0906 20:34:13.248965  765316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:34:13.260819  765316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:34:13.275196  765316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:34:13.289131  765316 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:34:13.289202  765316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:34:13.303821  765316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:34:13.316447  765316 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:34:13.316537  765316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
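
The grep/rm sequence above prunes any kubeadm-generated kubeconfig that no longer references https://control-plane.minikube.internal:8443 (here controller-manager.conf and scheduler.conf), so the kubeconfig init phase further below can regenerate them. A small sketch of that pruning, using direct file access instead of the SSH runner (an assumption):

package main

import (
	"os"
	"strings"
)

// pruneStaleKubeconfigs deletes any kubeconfig that no longer references the
// control-plane endpoint, so "kubeadm init phase kubeconfig" can regenerate it.
func pruneStaleKubeconfigs(endpoint string, paths []string) error {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			return err
		}
		if !strings.Contains(string(data), endpoint) {
			if err := os.Remove(p); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	_ = pruneStaleKubeconfigs("https://control-plane.minikube.internal:8443",
		[]string{"/etc/kubernetes/controller-manager.conf", "/etc/kubernetes/scheduler.conf"})
}
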
	I0906 20:34:13.328173  765316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:34:13.342566  765316 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0906 20:34:13.342612  765316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:34:13.627415  765316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:34:15.615649  765316 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.988198966s)
	I0906 20:34:15.615681  765316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:34:15.939965  765316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:34:16.035682  765316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
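
Rather than a full "kubeadm init", the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing /var/tmp/minikube/kubeadm.yaml, as the five Run lines above show. A sketch of that sequence, run locally via os/exec as an assumption instead of over SSH with the pinned binaries:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// restartControlPlane replays the init phases logged above against an
// existing kubeadm config instead of re-initialising the cluster.
func restartControlPlane(kubeadmYAML string) error {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", kubeadmYAML)
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %q failed: %v\n%s", phase, err, out)
		}
	}
	return nil
}

func main() {
	if err := restartControlPlane("/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Println(err)
	}
}
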
	I0906 20:34:16.138021  765316 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:34:16.138132  765316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:34:16.151245  765316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:34:16.666029  765316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:34:17.166182  765316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:34:17.217247  765316 api_server.go:72] duration metric: took 1.079224593s to wait for apiserver process to appear ...
	I0906 20:34:17.217269  765316 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:34:17.217286  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:17.217590  765316 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0906 20:34:17.217618  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:17.217783  765316 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0906 20:34:17.718478  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:22.719312  765316 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 20:34:22.719345  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:25.544399  765316 api_server.go:279] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:34:25.544424  765316 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:34:25.544437  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:25.595474  765316 api_server.go:279] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:34:25.595500  765316 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:34:25.718647  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:25.738658  765316 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0906 20:34:25.738687  765316 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0906 20:34:26.218140  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:26.240938  765316 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0906 20:34:26.240976  765316 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0906 20:34:26.718271  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:26.758413  765316 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0906 20:34:26.758493  765316 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0906 20:34:27.217931  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:27.229967  765316 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0906 20:34:27.254936  765316 api_server.go:141] control plane version: v1.28.1
	I0906 20:34:27.254962  765316 api_server.go:131] duration metric: took 10.037686008s to wait for apiserver health ...
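
The healthz wait above shows the expected progression of a restarting apiserver: connection refused while the container starts, a 403 while the anonymous probe is still rejected (the rbac/bootstrap-roles poststarthook has not finished yet), 500 while bootstrap poststarthooks complete, and finally 200 "ok" after roughly 10s. A minimal polling sketch under the assumption that transport errors, 403 and 500 all simply mean "retry later" and that TLS verification is skipped for the probe:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200,
// treating transport errors, 403 and 500 alike as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil // the "ok" body seen above
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.67.2:8443/healthz", time.Minute))
}
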
	I0906 20:34:27.254973  765316 cni.go:84] Creating CNI manager for ""
	I0906 20:34:27.254980  765316 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0906 20:34:27.257913  765316 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0906 20:34:27.259425  765316 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0906 20:34:27.270904  765316 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0906 20:34:27.270925  765316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0906 20:34:27.301759  765316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0906 20:34:28.771455  765316 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.469655953s)
	I0906 20:34:28.771484  765316 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:34:28.781469  765316 system_pods.go:59] 7 kube-system pods found
	I0906 20:34:28.781560  765316 system_pods.go:61] "coredns-5dd5756b68-5tvwb" [d2358999-88bf-4ed4-b2ca-c2fb70773e36] Running
	I0906 20:34:28.781581  765316 system_pods.go:61] "etcd-pause-056574" [03cf1bfc-ead0-422a-96d5-db71d30b7fa3] Running
	I0906 20:34:28.781622  765316 system_pods.go:61] "kindnet-rw8hd" [e90346fb-20dd-4265-8d3b-8f0a270025ce] Running
	I0906 20:34:28.781651  765316 system_pods.go:61] "kube-apiserver-pause-056574" [f4ad611e-3361-4424-9530-040ba395f734] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 20:34:28.781678  765316 system_pods.go:61] "kube-controller-manager-pause-056574" [ca30e49f-b35a-4884-bedd-3a64973b3e79] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 20:34:28.781714  765316 system_pods.go:61] "kube-proxy-mhjb5" [2f662ac9-4819-4de1-a149-1427c9be35f4] Running
	I0906 20:34:28.781738  765316 system_pods.go:61] "kube-scheduler-pause-056574" [9cf0da5e-1288-4a48-bce8-e02d8754c94c] Running
	I0906 20:34:28.781759  765316 system_pods.go:74] duration metric: took 10.26799ms to wait for pod list to return data ...
	I0906 20:34:28.781793  765316 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:34:28.785373  765316 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0906 20:34:28.785400  765316 node_conditions.go:123] node cpu capacity is 2
	I0906 20:34:28.785414  765316 node_conditions.go:105] duration metric: took 3.596705ms to run NodePressure ...
	I0906 20:34:28.785430  765316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:34:29.038210  765316 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0906 20:34:29.045894  765316 kubeadm.go:787] kubelet initialised
	I0906 20:34:29.045967  765316 kubeadm.go:788] duration metric: took 7.729129ms waiting for restarted kubelet to initialise ...
	I0906 20:34:29.046002  765316 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:34:29.055519  765316 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-5tvwb" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:29.064066  765316 pod_ready.go:92] pod "coredns-5dd5756b68-5tvwb" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:29.064139  765316 pod_ready.go:81] duration metric: took 8.528489ms waiting for pod "coredns-5dd5756b68-5tvwb" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:29.064173  765316 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:29.072130  765316 pod_ready.go:92] pod "etcd-pause-056574" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:29.072197  765316 pod_ready.go:81] duration metric: took 8.003ms waiting for pod "etcd-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:29.072242  765316 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:31.182583  765316 pod_ready.go:102] pod "kube-apiserver-pause-056574" in "kube-system" namespace has status "Ready":"False"
	I0906 20:34:31.683852  765316 pod_ready.go:92] pod "kube-apiserver-pause-056574" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:31.683912  765316 pod_ready.go:81] duration metric: took 2.61151632s waiting for pod "kube-apiserver-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:31.683950  765316 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:33.986279  765316 pod_ready.go:102] pod "kube-controller-manager-pause-056574" in "kube-system" namespace has status "Ready":"False"
	I0906 20:34:35.986616  765316 pod_ready.go:102] pod "kube-controller-manager-pause-056574" in "kube-system" namespace has status "Ready":"False"
	I0906 20:34:36.984707  765316 pod_ready.go:92] pod "kube-controller-manager-pause-056574" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:36.984730  765316 pod_ready.go:81] duration metric: took 5.300753851s waiting for pod "kube-controller-manager-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:36.984741  765316 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mhjb5" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:36.992075  765316 pod_ready.go:92] pod "kube-proxy-mhjb5" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:36.992095  765316 pod_ready.go:81] duration metric: took 7.34737ms waiting for pod "kube-proxy-mhjb5" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:36.992106  765316 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:36.998875  765316 pod_ready.go:92] pod "kube-scheduler-pause-056574" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:36.998951  765316 pod_ready.go:81] duration metric: took 6.836216ms waiting for pod "kube-scheduler-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:36.998978  765316 pod_ready.go:38] duration metric: took 7.952871853s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
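
The pod_ready.go waits above amount to polling each system-critical pod until its Ready condition is True. A minimal client-go sketch of the same check, using the kubeconfig path this run writes (/home/jenkins/minikube-integration/17116-652515/kubeconfig) and one of the pod names from the log; this is an illustration, not minikube's own wait loop.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17116-652515/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        name := "kube-apiserver-pause-056574" // one of the system-critical pods above
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println(name, "is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
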
	I0906 20:34:36.999027  765316 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 20:34:37.011197  765316 ops.go:34] apiserver oom_adj: -16
	I0906 20:34:37.011286  765316 kubeadm.go:640] restartCluster took 47.133617068s
	I0906 20:34:37.011312  765316 kubeadm.go:406] StartCluster complete in 47.489035957s
	I0906 20:34:37.011366  765316 settings.go:142] acquiring lock: {Name:mk0ee322179d939fb926f535c1408b304c5b8b41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:34:37.011473  765316 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17116-652515/kubeconfig
	I0906 20:34:37.012323  765316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/kubeconfig: {Name:mkd5486ff1869e88b8977ac367495417356f4177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:34:37.012655  765316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 20:34:37.013046  765316 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0906 20:34:37.015493  765316 out.go:177] * Enabled addons: 
	I0906 20:34:37.013584  765316 config.go:182] Loaded profile config "pause-056574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 20:34:37.014456  765316 kapi.go:59] client config for pause-056574: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/client.crt", KeyFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/client.key", CAFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x172c280), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 20:34:37.017943  765316 addons.go:502] enable addons completed in 4.894787ms: enabled=[]
	I0906 20:34:37.021942  765316 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-056574" context rescaled to 1 replicas
	I0906 20:34:37.022087  765316 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:34:37.024476  765316 out.go:177] * Verifying Kubernetes components...
	I0906 20:34:37.026589  765316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:34:37.172311  765316 start.go:880] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0906 20:34:37.172406  765316 node_ready.go:35] waiting up to 6m0s for node "pause-056574" to be "Ready" ...
	I0906 20:34:37.176294  765316 node_ready.go:49] node "pause-056574" has status "Ready":"True"
	I0906 20:34:37.176362  765316 node_ready.go:38] duration metric: took 3.892138ms waiting for node "pause-056574" to be "Ready" ...
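
The node_ready.go and node_conditions.go checks above read the Node object's conditions and capacity. A client-go sketch of the same lookup, again assuming the run's kubeconfig path; it prints the Ready/pressure conditions and the capacity fields the log reports (2 CPUs, 203034800Ki ephemeral storage).

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17116-652515/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "pause-056574", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady || c.Type == corev1.NodeMemoryPressure ||
                c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure {
                fmt.Printf("%-16s %s  (%s)\n", c.Type, c.Status, c.Reason)
            }
        }
        fmt.Println("cpu capacity:", node.Status.Capacity.Cpu().String())
        fmt.Println("ephemeral-storage capacity:", node.Status.Capacity.StorageEphemeral().String())
    }
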
	I0906 20:34:37.176384  765316 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:34:37.184824  765316 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5tvwb" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:37.575515  765316 pod_ready.go:92] pod "coredns-5dd5756b68-5tvwb" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:37.575540  765316 pod_ready.go:81] duration metric: took 390.640164ms waiting for pod "coredns-5dd5756b68-5tvwb" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:37.575554  765316 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:37.979328  765316 pod_ready.go:92] pod "etcd-pause-056574" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:37.979401  765316 pod_ready.go:81] duration metric: took 403.838144ms waiting for pod "etcd-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:37.979443  765316 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:38.375867  765316 pod_ready.go:92] pod "kube-apiserver-pause-056574" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:38.375943  765316 pod_ready.go:81] duration metric: took 396.457773ms waiting for pod "kube-apiserver-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:38.375971  765316 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:38.775619  765316 pod_ready.go:92] pod "kube-controller-manager-pause-056574" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:38.775689  765316 pod_ready.go:81] duration metric: took 399.696841ms waiting for pod "kube-controller-manager-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:38.775716  765316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mhjb5" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:39.177197  765316 pod_ready.go:92] pod "kube-proxy-mhjb5" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:39.177227  765316 pod_ready.go:81] duration metric: took 401.490151ms waiting for pod "kube-proxy-mhjb5" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:39.177245  765316 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:39.575606  765316 pod_ready.go:92] pod "kube-scheduler-pause-056574" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:39.575627  765316 pod_ready.go:81] duration metric: took 398.373922ms waiting for pod "kube-scheduler-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:39.575636  765316 pod_ready.go:38] duration metric: took 2.39922622s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:34:39.575653  765316 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:34:39.575726  765316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:34:39.590691  765316 api_server.go:72] duration metric: took 2.568547077s to wait for apiserver process to appear ...
	I0906 20:34:39.590711  765316 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:34:39.590728  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:39.601600  765316 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0906 20:34:39.603032  765316 api_server.go:141] control plane version: v1.28.1
	I0906 20:34:39.603103  765316 api_server.go:131] duration metric: took 12.384499ms to wait for apiserver health ...
	I0906 20:34:39.603126  765316 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:34:39.779771  765316 system_pods.go:59] 7 kube-system pods found
	I0906 20:34:39.779863  765316 system_pods.go:61] "coredns-5dd5756b68-5tvwb" [d2358999-88bf-4ed4-b2ca-c2fb70773e36] Running
	I0906 20:34:39.779884  765316 system_pods.go:61] "etcd-pause-056574" [03cf1bfc-ead0-422a-96d5-db71d30b7fa3] Running
	I0906 20:34:39.779922  765316 system_pods.go:61] "kindnet-rw8hd" [e90346fb-20dd-4265-8d3b-8f0a270025ce] Running
	I0906 20:34:39.779947  765316 system_pods.go:61] "kube-apiserver-pause-056574" [f4ad611e-3361-4424-9530-040ba395f734] Running
	I0906 20:34:39.779976  765316 system_pods.go:61] "kube-controller-manager-pause-056574" [ca30e49f-b35a-4884-bedd-3a64973b3e79] Running
	I0906 20:34:39.780012  765316 system_pods.go:61] "kube-proxy-mhjb5" [2f662ac9-4819-4de1-a149-1427c9be35f4] Running
	I0906 20:34:39.780035  765316 system_pods.go:61] "kube-scheduler-pause-056574" [9cf0da5e-1288-4a48-bce8-e02d8754c94c] Running
	I0906 20:34:39.780055  765316 system_pods.go:74] duration metric: took 176.911288ms to wait for pod list to return data ...
	I0906 20:34:39.780090  765316 default_sa.go:34] waiting for default service account to be created ...
	I0906 20:34:39.977219  765316 default_sa.go:45] found service account: "default"
	I0906 20:34:39.977245  765316 default_sa.go:55] duration metric: took 197.126488ms for default service account to be created ...
	I0906 20:34:39.977254  765316 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 20:34:40.182355  765316 system_pods.go:86] 7 kube-system pods found
	I0906 20:34:40.182442  765316 system_pods.go:89] "coredns-5dd5756b68-5tvwb" [d2358999-88bf-4ed4-b2ca-c2fb70773e36] Running
	I0906 20:34:40.182468  765316 system_pods.go:89] "etcd-pause-056574" [03cf1bfc-ead0-422a-96d5-db71d30b7fa3] Running
	I0906 20:34:40.182510  765316 system_pods.go:89] "kindnet-rw8hd" [e90346fb-20dd-4265-8d3b-8f0a270025ce] Running
	I0906 20:34:40.182538  765316 system_pods.go:89] "kube-apiserver-pause-056574" [f4ad611e-3361-4424-9530-040ba395f734] Running
	I0906 20:34:40.182563  765316 system_pods.go:89] "kube-controller-manager-pause-056574" [ca30e49f-b35a-4884-bedd-3a64973b3e79] Running
	I0906 20:34:40.182602  765316 system_pods.go:89] "kube-proxy-mhjb5" [2f662ac9-4819-4de1-a149-1427c9be35f4] Running
	I0906 20:34:40.182631  765316 system_pods.go:89] "kube-scheduler-pause-056574" [9cf0da5e-1288-4a48-bce8-e02d8754c94c] Running
	I0906 20:34:40.182655  765316 system_pods.go:126] duration metric: took 205.395489ms to wait for k8s-apps to be running ...
	I0906 20:34:40.183486  765316 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 20:34:40.183592  765316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:34:40.203727  765316 system_svc.go:56] duration metric: took 20.227639ms WaitForService to wait for kubelet.
	I0906 20:34:40.204707  765316 kubeadm.go:581] duration metric: took 3.182559304s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 20:34:40.206224  765316 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:34:40.375845  765316 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0906 20:34:40.375926  765316 node_conditions.go:123] node cpu capacity is 2
	I0906 20:34:40.375951  765316 node_conditions.go:105] duration metric: took 169.705809ms to run NodePressure ...
	I0906 20:34:40.375990  765316 start.go:228] waiting for startup goroutines ...
	I0906 20:34:40.376013  765316 start.go:233] waiting for cluster config update ...
	I0906 20:34:40.376583  765316 start.go:242] writing updated cluster config ...
	I0906 20:34:40.377592  765316 ssh_runner.go:195] Run: rm -f paused
	I0906 20:34:40.470711  765316 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0906 20:34:40.474433  765316 out.go:177] * Done! kubectl is now configured to use "pause-056574" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.556357482Z" level=info msg="Creating container: kube-system/kindnet-rw8hd/kindnet-cni" id=2d84b2f6-f2df-469c-b8e6-8cbb3e8216c4 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.556401732Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.582364895Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c18407509c4bda0b11a9da84486987eef53b2f96e362fab1f2e71e380fcbfb4b/merged/etc/passwd: no such file or directory"
	Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.582415168Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c18407509c4bda0b11a9da84486987eef53b2f96e362fab1f2e71e380fcbfb4b/merged/etc/group: no such file or directory"
	Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.758356135Z" level=info msg="Created container f058a725708e00e3d17eb424bbd3173c87c0b4944cf54886b56e5c7478dc5d93: kube-system/coredns-5dd5756b68-5tvwb/coredns" id=b1b174ec-d79d-479a-a592-47f7be9aa72a name=/runtime.v1.RuntimeService/CreateContainer
	Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.764626510Z" level=info msg="Starting container: f058a725708e00e3d17eb424bbd3173c87c0b4944cf54886b56e5c7478dc5d93" id=d513b22f-268e-49eb-88ad-7eda2b83d457 name=/runtime.v1.RuntimeService/StartContainer
	Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.809168214Z" level=info msg="Started container" PID=3480 containerID=f058a725708e00e3d17eb424bbd3173c87c0b4944cf54886b56e5c7478dc5d93 description=kube-system/coredns-5dd5756b68-5tvwb/coredns id=d513b22f-268e-49eb-88ad-7eda2b83d457 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d446768dbcd6e7973cdd3f1e55bcfad6d797985bb6b132644d4e2b88258a3eb3
	Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.815288345Z" level=info msg="Created container 31b4e161a71bbe6accf806d5653f5daee80c433ee25a0a2046e707ad006d968f: kube-system/kindnet-rw8hd/kindnet-cni" id=2d84b2f6-f2df-469c-b8e6-8cbb3e8216c4 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.816075052Z" level=info msg="Starting container: 31b4e161a71bbe6accf806d5653f5daee80c433ee25a0a2046e707ad006d968f" id=18c03046-b386-43fc-87f9-409314b901c4 name=/runtime.v1.RuntimeService/StartContainer
	Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.840660264Z" level=info msg="Started container" PID=3503 containerID=31b4e161a71bbe6accf806d5653f5daee80c433ee25a0a2046e707ad006d968f description=kube-system/kindnet-rw8hd/kindnet-cni id=18c03046-b386-43fc-87f9-409314b901c4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bb7982c6df4f0bbd6b02cdca8427fba6fe97e6154887c4d548449995a73fca8d
	Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.871300122Z" level=info msg="Created container cdc7daebdb837dc5d6897ebc0fd7d4f64805a146b9718f411ae21639376a364c: kube-system/kube-proxy-mhjb5/kube-proxy" id=9db0accd-0f94-4299-a266-ee37ba7c0ecd name=/runtime.v1.RuntimeService/CreateContainer
	Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.871849151Z" level=info msg="Starting container: cdc7daebdb837dc5d6897ebc0fd7d4f64805a146b9718f411ae21639376a364c" id=ae275483-d7c4-4314-bb50-70b2c483594b name=/runtime.v1.RuntimeService/StartContainer
	Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.898569293Z" level=info msg="Started container" PID=3501 containerID=cdc7daebdb837dc5d6897ebc0fd7d4f64805a146b9718f411ae21639376a364c description=kube-system/kube-proxy-mhjb5/kube-proxy id=ae275483-d7c4-4314-bb50-70b2c483594b name=/runtime.v1.RuntimeService/StartContainer sandboxID=dc2a0c975464dc25e7bfefc575d08a0a3618933283327721de1d249ce091b30f
	Sep 06 20:34:27 pause-056574 crio[2470]: time="2023-09-06 20:34:27.492613335Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Sep 06 20:34:27 pause-056574 crio[2470]: time="2023-09-06 20:34:27.542324616Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 06 20:34:27 pause-056574 crio[2470]: time="2023-09-06 20:34:27.542358150Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 06 20:34:27 pause-056574 crio[2470]: time="2023-09-06 20:34:27.542373527Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Sep 06 20:34:27 pause-056574 crio[2470]: time="2023-09-06 20:34:27.548964976Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 06 20:34:27 pause-056574 crio[2470]: time="2023-09-06 20:34:27.548997041Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 06 20:34:27 pause-056574 crio[2470]: time="2023-09-06 20:34:27.549013566Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Sep 06 20:34:27 pause-056574 crio[2470]: time="2023-09-06 20:34:27.573747389Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 06 20:34:27 pause-056574 crio[2470]: time="2023-09-06 20:34:27.573782401Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 06 20:34:27 pause-056574 crio[2470]: time="2023-09-06 20:34:27.573800181Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 06 20:34:27 pause-056574 crio[2470]: time="2023-09-06 20:34:27.594685920Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 06 20:34:27 pause-056574 crio[2470]: time="2023-09-06 20:34:27.594730851Z" level=info msg="Updated default CNI network name to kindnet"
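
The CRI-O entries above track kindnet writing /etc/cni/net.d/10-kindnet.conflist and CRI-O re-reading it to pick up the default network name ("kindnet", plugin type ptp). A stdlib-only Go sketch of reading such a conflist, with field names per the CNI spec and the path taken from the log:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // confList models the subset of a CNI .conflist file inspected here.
    type confList struct {
        CNIVersion string `json:"cniVersion"`
        Name       string `json:"name"`
        Plugins    []struct {
            Type string `json:"type"`
        } `json:"plugins"`
    }

    func main() {
        data, err := os.ReadFile("/etc/cni/net.d/10-kindnet.conflist")
        if err != nil {
            panic(err)
        }
        var cl confList
        if err := json.Unmarshal(data, &cl); err != nil {
            panic(err)
        }
        fmt.Printf("network %q (cniVersion %s)\n", cl.Name, cl.CNIVersion)
        for _, p := range cl.Plugins {
            fmt.Println("  plugin type:", p.Type) // e.g. ptp, as CRI-O logged
        }
    }
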
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cdc7daebdb837       812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26   15 seconds ago      Running             kube-proxy                2                   dc2a0c975464d       kube-proxy-mhjb5
	31b4e161a71bb       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79   15 seconds ago      Running             kindnet-cni               2                   bb7982c6df4f0       kindnet-rw8hd
	f058a725708e0       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   15 seconds ago      Running             coredns                   2                   d446768dbcd6e       coredns-5dd5756b68-5tvwb
	4c58ee65ee166       8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965   24 seconds ago      Running             kube-controller-manager   2                   ff36952b952c0       kube-controller-manager-pause-056574
	f045348129186       b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87   24 seconds ago      Running             kube-scheduler            2                   07072b4ff7729       kube-scheduler-pause-056574
	bc2c363583248       b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a   24 seconds ago      Running             kube-apiserver            2                   dea0c642ad445       kube-apiserver-pause-056574
	25eee559bbd70       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   35 seconds ago      Running             etcd                      2                   21221832e99b3       etcd-pause-056574
	05f54a6d8be03       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79   46 seconds ago      Exited              kindnet-cni               1                   bb7982c6df4f0       kindnet-rw8hd
	bb742a60f04ad       812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26   46 seconds ago      Exited              kube-proxy                1                   dc2a0c975464d       kube-proxy-mhjb5
	e2eb1c64ed3cd       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   50 seconds ago      Exited              coredns                   1                   d446768dbcd6e       coredns-5dd5756b68-5tvwb
	025ca323c3897       b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87   54 seconds ago      Exited              kube-scheduler            1                   07072b4ff7729       kube-scheduler-pause-056574
	34b113d8d281b       8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965   54 seconds ago      Exited              kube-controller-manager   1                   ff36952b952c0       kube-controller-manager-pause-056574
	b79701e0a8b68       b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a   54 seconds ago      Exited              kube-apiserver            1                   dea0c642ad445       kube-apiserver-pause-056574
	8f78c0810b336       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   58 seconds ago      Exited              etcd                      1                   21221832e99b3       etcd-pause-056574
	
	* 
	* ==> coredns [e2eb1c64ed3cdfabc1a99498e56f978b1d13387b663c261485559c5bf1f864e8] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36765 - 9933 "HINFO IN 7100186137038432082.6397341531603900995. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023604619s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [f058a725708e00e3d17eb424bbd3173c87c0b4944cf54886b56e5c7478dc5d93] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43073 - 34093 "HINFO IN 8362203999672292090.7083563004841075398. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014929013s
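
The two CoreDNS logs show the old replica losing its API connection during the restart (TLS handshake timeout, then SIGTERM) while the new replica comes up cleanly and answers queries. A Go sketch of exercising cluster DNS directly; the resolver address 10.96.0.10 is an assumption (the usual kube-dns ClusterIP in minikube clusters, not printed in this log), and the lookup only works from inside the cluster network.

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // 10.96.0.10 is assumed to be the kube-dns service IP; adjust for your cluster.
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
                d := net.Dialer{Timeout: 2 * time.Second}
                return d.DialContext(ctx, network, "10.96.0.10:53")
            },
        }
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println("kubernetes service resolves to:", addrs)
    }
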
	
	* 
	* ==> describe nodes <==
	* Name:               pause-056574
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-056574
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138
	                    minikube.k8s.io/name=pause-056574
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_06T20_32_47_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Sep 2023 20:32:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-056574
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Sep 2023 20:34:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Sep 2023 20:34:25 +0000   Wed, 06 Sep 2023 20:32:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Sep 2023 20:34:25 +0000   Wed, 06 Sep 2023 20:32:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Sep 2023 20:34:25 +0000   Wed, 06 Sep 2023 20:32:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Sep 2023 20:34:25 +0000   Wed, 06 Sep 2023 20:33:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    pause-056574
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 f44d9b4f437f4f04954598d8d2de3efa
	  System UUID:                a808912e-078d-4afe-9412-74f8bdb30571
	  Boot ID:                    d5624a78-31f3-41c0-a03f-adfa6e3f71eb
	  Kernel Version:             5.15.0-1044-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-5tvwb                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     104s
	  kube-system                 etcd-pause-056574                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         116s
	  kube-system                 kindnet-rw8hd                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-pause-056574             250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-pause-056574    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-mhjb5                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-pause-056574             100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (2%)   220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 101s                   kube-proxy       
	  Normal  Starting                 14s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node pause-056574 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node pause-056574 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x8 over 2m10s)  kubelet          Node pause-056574 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     117s                   kubelet          Node pause-056574 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  117s                   kubelet          Node pause-056574 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s                   kubelet          Node pause-056574 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 117s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s                   node-controller  Node pause-056574 event: Registered Node pause-056574 in Controller
	  Normal  NodeReady                71s                    kubelet          Node pause-056574 status is now: NodeReady
	  Normal  Starting                 26s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)      kubelet          Node pause-056574 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)      kubelet          Node pause-056574 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x8 over 26s)      kubelet          Node pause-056574 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                     node-controller  Node pause-056574 event: Registered Node pause-056574 in Controller
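
As a quick cross-check, the Allocated resources totals match the per-pod requests listed above: CPU 100m + 100m + 100m + 250m + 200m + 100m = 850m, i.e. 42% of the node's 2 CPUs; memory 70Mi + 100Mi + 50Mi = 220Mi. The 100m CPU limit is kindnet's, and the 220Mi memory limit is coredns (170Mi) plus kindnet (50Mi).
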
	
	* 
	* ==> dmesg <==
	* [  +0.001083] FS-Cache: O-key=[8] '96d3c90000000000'
	[  +0.000766] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000988] FS-Cache: N-cookie d=00000000a39b565b{9p.inode} n=000000002b2f1a65
	[  +0.001160] FS-Cache: N-key=[8] '96d3c90000000000'
	[  +0.002380] FS-Cache: Duplicate cookie detected
	[  +0.000722] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.000991] FS-Cache: O-cookie d=00000000a39b565b{9p.inode} n=00000000f3c7fb8d
	[  +0.001073] FS-Cache: O-key=[8] '96d3c90000000000'
	[  +0.000829] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000982] FS-Cache: N-cookie d=00000000a39b565b{9p.inode} n=0000000050869d71
	[  +0.001077] FS-Cache: N-key=[8] '96d3c90000000000'
	[  +2.999130] FS-Cache: Duplicate cookie detected
	[  +0.000756] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.000998] FS-Cache: O-cookie d=00000000a39b565b{9p.inode} n=00000000da17136c
	[  +0.001217] FS-Cache: O-key=[8] '95d3c90000000000'
	[  +0.000727] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000970] FS-Cache: N-cookie d=00000000a39b565b{9p.inode} n=000000002b2f1a65
	[  +0.001133] FS-Cache: N-key=[8] '95d3c90000000000'
	[  +0.318024] FS-Cache: Duplicate cookie detected
	[  +0.000783] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.000990] FS-Cache: O-cookie d=00000000a39b565b{9p.inode} n=000000003cc11187
	[  +0.001164] FS-Cache: O-key=[8] '9bd3c90000000000'
	[  +0.000748] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000986] FS-Cache: N-cookie d=00000000a39b565b{9p.inode} n=00000000302c6dfe
	[  +0.001111] FS-Cache: N-key=[8] '9bd3c90000000000'
	
	* 
	* ==> etcd [25eee559bbd705e1cab1d36df6cf0fd3f2f4163d971ef2dab2230d7f093e9788] <==
	* {"level":"info","ts":"2023-09-06T20:34:06.279875Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-06T20:34:06.279888Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-06T20:34:06.281313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-09-06T20:34:06.281443Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-09-06T20:34:06.281556Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-06T20:34:06.281584Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-06T20:34:06.287014Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-06T20:34:06.287227Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-09-06T20:34:06.287636Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-09-06T20:34:06.289239Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-06T20:34:06.287812Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-06T20:34:07.850492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-06T20:34:07.850602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-06T20:34:07.850661Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-09-06T20:34:07.8507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2023-09-06T20:34:07.85071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-09-06T20:34:07.850721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2023-09-06T20:34:07.850729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-09-06T20:34:07.851502Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-056574 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-06T20:34:07.851579Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-06T20:34:07.852576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-06T20:34:07.852829Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-06T20:34:07.853745Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-09-06T20:34:07.869282Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-06T20:34:07.869326Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0] <==
	* 
	* 
	* ==> kernel <==
	*  20:34:42 up  3:13,  0 users,  load average: 4.11, 2.69, 2.07
	Linux pause-056574 5.15.0-1044-aws #49~20.04.1-Ubuntu SMP Mon Aug 21 17:10:24 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [05f54a6d8be033bd7c29148b0df899659832d6baf55266ef5cd91ae6387cf6e1] <==
	* I0906 20:33:55.222620       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0906 20:33:55.222918       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I0906 20:33:55.223175       1 main.go:116] setting mtu 1500 for CNI 
	I0906 20:33:55.223220       1 main.go:146] kindnetd IP family: "ipv4"
	I0906 20:33:55.223257       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0906 20:34:05.436303       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	
	* 
	* ==> kindnet [31b4e161a71bbe6accf806d5653f5daee80c433ee25a0a2046e707ad006d968f] <==
	* I0906 20:34:26.932078       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0906 20:34:26.937758       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I0906 20:34:26.938019       1 main.go:116] setting mtu 1500 for CNI 
	I0906 20:34:26.938106       1 main.go:146] kindnetd IP family: "ipv4"
	I0906 20:34:26.938167       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0906 20:34:27.483779       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0906 20:34:27.492411       1 main.go:227] handling current node
	I0906 20:34:37.511668       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0906 20:34:37.511698       1 main.go:227] handling current node
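
kindnet's "Handling node with IPs" loop comes from listing Node objects and reconciling routes for each node's PodCIDR; with a single node it only "handles the current node". A client-go sketch of that listing, assuming it runs as an in-cluster pod with a service account (as the kindnet DaemonSet does):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // In-cluster config, as a DaemonSet pod such as kindnet would use.
        config, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            var internalIP string
            for _, addr := range n.Status.Addresses {
                if addr.Type == corev1.NodeInternalIP {
                    internalIP = addr.Address
                }
            }
            // A CNI daemon would install a route to PodCIDR via internalIP for remote nodes.
            fmt.Printf("node %s internalIP=%s podCIDR=%s\n", n.Name, internalIP, n.Spec.PodCIDR)
        }
    }
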
	
	* 
	* ==> kube-apiserver [b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558] <==
	* I0906 20:34:11.864727       1 controller.go:178] quota evaluator worker shutdown
	E0906 20:34:11.869199       1 storage_rbac.go:264] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.869816       1 storage_rbac.go:264] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.872076       1 storage_rbac.go:264] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.873733       1 storage_rbac.go:264] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.875207       1 storage_rbac.go:264] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.876552       1 storage_rbac.go:264] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.877554       1 storage_rbac.go:264] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.878463       1 storage_rbac.go:264] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-after-finished-controller: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-after-finished-controller": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.879220       1 storage_rbac.go:264] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:root-ca-cert-publisher: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:root-ca-cert-publisher": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.881161       1 storage_rbac.go:295] unable to reconcile role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.882805       1 storage_rbac.go:295] unable to reconcile role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.884268       1 storage_rbac.go:295] unable to reconcile role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.885719       1 storage_rbac.go:295] unable to reconcile role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.887160       1 storage_rbac.go:295] unable to reconcile role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.888588       1 storage_rbac.go:295] unable to reconcile role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.890030       1 storage_rbac.go:295] unable to reconcile role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.891492       1 storage_rbac.go:329] unable to reconcile rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.892891       1 storage_rbac.go:329] unable to reconcile rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.894304       1 storage_rbac.go:329] unable to reconcile rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.895714       1 storage_rbac.go:329] unable to reconcile rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.897111       1 storage_rbac.go:329] unable to reconcile rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.898635       1 storage_rbac.go:329] unable to reconcile rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.900043       1 storage_rbac.go:329] unable to reconcile rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:12.487395       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	
	* 
	* ==> kube-apiserver [bc2c363583248e74baf3152ec7827a5c0906f7c58cd3571705f045e6005ad033] <==
	* I0906 20:34:25.556727       1 controller.go:85] Starting OpenAPI V3 controller
	I0906 20:34:25.556748       1 naming_controller.go:291] Starting NamingConditionController
	I0906 20:34:25.556761       1 establishing_controller.go:76] Starting EstablishingController
	I0906 20:34:25.556774       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0906 20:34:25.556785       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0906 20:34:25.556796       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0906 20:34:25.581961       1 shared_informer.go:318] Caches are synced for configmaps
	I0906 20:34:25.595619       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0906 20:34:25.637192       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0906 20:34:25.637218       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0906 20:34:25.637292       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 20:34:25.637729       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0906 20:34:25.638842       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 20:34:25.657851       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0906 20:34:25.657936       1 aggregator.go:166] initial CRD sync complete...
	I0906 20:34:25.657952       1 autoregister_controller.go:141] Starting autoregister controller
	I0906 20:34:25.657958       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0906 20:34:25.657965       1 cache.go:39] Caches are synced for autoregister controller
	I0906 20:34:25.688057       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0906 20:34:26.414943       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0906 20:34:28.763282       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0906 20:34:28.921177       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0906 20:34:28.931644       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0906 20:34:29.014347       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 20:34:29.025740       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087] <==
	* I0906 20:33:51.078728       1 serving.go:348] Generated self-signed cert in-memory
	I0906 20:33:52.142636       1 controllermanager.go:189] "Starting" version="v1.28.1"
	I0906 20:33:52.142733       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 20:33:52.145887       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0906 20:33:52.145994       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0906 20:33:52.147816       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0906 20:33:52.147879       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-controller-manager [4c58ee65ee166b8edddca4c0d8d07994640c3c10601b2999aaf62240e14b387c] <==
	* I0906 20:34:38.218366       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0906 20:34:38.218414       1 taint_manager.go:211] "Sending events to api server"
	I0906 20:34:38.219084       1 event.go:307] "Event occurred" object="pause-056574" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-056574 event: Registered Node pause-056574 in Controller"
	I0906 20:34:38.231416       1 shared_informer.go:318] Caches are synced for PV protection
	I0906 20:34:38.240735       1 shared_informer.go:318] Caches are synced for daemon sets
	I0906 20:34:38.245790       1 shared_informer.go:318] Caches are synced for PVC protection
	I0906 20:34:38.250069       1 shared_informer.go:318] Caches are synced for disruption
	I0906 20:34:38.251285       1 shared_informer.go:318] Caches are synced for expand
	I0906 20:34:38.253599       1 shared_informer.go:318] Caches are synced for TTL
	I0906 20:34:38.260030       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0906 20:34:38.261173       1 shared_informer.go:318] Caches are synced for deployment
	I0906 20:34:38.265175       1 shared_informer.go:318] Caches are synced for persistent volume
	I0906 20:34:38.270327       1 shared_informer.go:318] Caches are synced for attach detach
	I0906 20:34:38.278128       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0906 20:34:38.293735       1 shared_informer.go:318] Caches are synced for resource quota
	I0906 20:34:38.294946       1 shared_informer.go:318] Caches are synced for resource quota
	I0906 20:34:38.302188       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0906 20:34:38.302332       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0906 20:34:38.302397       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0906 20:34:38.302448       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0906 20:34:38.319451       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0906 20:34:38.320290       1 shared_informer.go:318] Caches are synced for endpoint
	I0906 20:34:38.688533       1 shared_informer.go:318] Caches are synced for garbage collector
	I0906 20:34:38.706446       1 shared_informer.go:318] Caches are synced for garbage collector
	I0906 20:34:38.706632       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [bb742a60f04ade79d3b6d8e52d3f63ca2c821b205aceb0ec66cc5f31197be6bc] <==
	* I0906 20:33:55.478695       1 server_others.go:69] "Using iptables proxy"
	E0906 20:34:05.498276       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-056574": net/http: TLS handshake timeout
	
	* 
	* ==> kube-proxy [cdc7daebdb837dc5d6897ebc0fd7d4f64805a146b9718f411ae21639376a364c] <==
	* I0906 20:34:27.151262       1 server_others.go:69] "Using iptables proxy"
	I0906 20:34:27.190887       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I0906 20:34:27.300881       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0906 20:34:27.320664       1 server_others.go:152] "Using iptables Proxier"
	I0906 20:34:27.320778       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0906 20:34:27.320811       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0906 20:34:27.334219       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0906 20:34:27.334512       1 server.go:846] "Version info" version="v1.28.1"
	I0906 20:34:27.334545       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 20:34:27.340390       1 config.go:97] "Starting endpoint slice config controller"
	I0906 20:34:27.342405       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0906 20:34:27.342453       1 config.go:188] "Starting service config controller"
	I0906 20:34:27.342460       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0906 20:34:27.345715       1 config.go:315] "Starting node config controller"
	I0906 20:34:27.345805       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0906 20:34:27.493937       1 shared_informer.go:318] Caches are synced for node config
	I0906 20:34:27.517199       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0906 20:34:27.517219       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59] <==
	* I0906 20:33:51.229739       1 serving.go:348] Generated self-signed cert in-memory
	W0906 20:34:02.949532       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.168.67.2:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0906 20:34:02.949573       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 20:34:02.949581       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 20:34:10.840666       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0906 20:34:10.840709       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 20:34:10.842596       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 20:34:10.842659       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 20:34:10.858553       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0906 20:34:10.858642       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0906 20:34:11.043914       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 20:34:11.351920       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0906 20:34:11.356144       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0906 20:34:11.356554       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [f045348129186455483905503d377503efdbcdb70c102b147193f54d480f404e] <==
	* I0906 20:34:22.584175       1 serving.go:348] Generated self-signed cert in-memory
	I0906 20:34:25.975687       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0906 20:34:25.975792       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 20:34:25.982006       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0906 20:34:25.982041       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0906 20:34:25.982219       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 20:34:25.982245       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 20:34:25.982428       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0906 20:34:25.982440       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0906 20:34:25.987966       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0906 20:34:25.991942       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0906 20:34:26.082964       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0906 20:34:26.083119       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0906 20:34:26.083257       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Sep 06 20:34:16 pause-056574 kubelet[3280]: I0906 20:34:16.914359    3280 scope.go:117] "RemoveContainer" containerID="b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558"
	Sep 06 20:34:16 pause-056574 kubelet[3280]: I0906 20:34:16.915861    3280 scope.go:117] "RemoveContainer" containerID="34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087"
	Sep 06 20:34:16 pause-056574 kubelet[3280]: I0906 20:34:16.916386    3280 scope.go:117] "RemoveContainer" containerID="025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59"
	Sep 06 20:34:16 pause-056574 kubelet[3280]: I0906 20:34:16.955358    3280 kubelet_node_status.go:70] "Attempting to register node" node="pause-056574"
	Sep 06 20:34:16 pause-056574 kubelet[3280]: E0906 20:34:16.955849    3280 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.67.2:8443: connect: connection refused" node="pause-056574"
	Sep 06 20:34:17 pause-056574 kubelet[3280]: W0906 20:34:17.086596    3280 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Sep 06 20:34:17 pause-056574 kubelet[3280]: E0906 20:34:17.086674    3280 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Sep 06 20:34:17 pause-056574 kubelet[3280]: I0906 20:34:17.757331    3280 kubelet_node_status.go:70] "Attempting to register node" node="pause-056574"
	Sep 06 20:34:25 pause-056574 kubelet[3280]: I0906 20:34:25.665634    3280 kubelet_node_status.go:108] "Node was previously registered" node="pause-056574"
	Sep 06 20:34:25 pause-056574 kubelet[3280]: I0906 20:34:25.665953    3280 kubelet_node_status.go:73] "Successfully registered node" node="pause-056574"
	Sep 06 20:34:25 pause-056574 kubelet[3280]: I0906 20:34:25.673284    3280 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 06 20:34:25 pause-056574 kubelet[3280]: I0906 20:34:25.680259    3280 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.215844    3280 apiserver.go:52] "Watching apiserver"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.226030    3280 topology_manager.go:215] "Topology Admit Handler" podUID="e90346fb-20dd-4265-8d3b-8f0a270025ce" podNamespace="kube-system" podName="kindnet-rw8hd"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.228615    3280 topology_manager.go:215] "Topology Admit Handler" podUID="2f662ac9-4819-4de1-a149-1427c9be35f4" podNamespace="kube-system" podName="kube-proxy-mhjb5"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.228717    3280 topology_manager.go:215] "Topology Admit Handler" podUID="d2358999-88bf-4ed4-b2ca-c2fb70773e36" podNamespace="kube-system" podName="coredns-5dd5756b68-5tvwb"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.247609    3280 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.308413    3280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f662ac9-4819-4de1-a149-1427c9be35f4-lib-modules\") pod \"kube-proxy-mhjb5\" (UID: \"2f662ac9-4819-4de1-a149-1427c9be35f4\") " pod="kube-system/kube-proxy-mhjb5"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.308496    3280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e90346fb-20dd-4265-8d3b-8f0a270025ce-lib-modules\") pod \"kindnet-rw8hd\" (UID: \"e90346fb-20dd-4265-8d3b-8f0a270025ce\") " pod="kube-system/kindnet-rw8hd"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.308526    3280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e90346fb-20dd-4265-8d3b-8f0a270025ce-cni-cfg\") pod \"kindnet-rw8hd\" (UID: \"e90346fb-20dd-4265-8d3b-8f0a270025ce\") " pod="kube-system/kindnet-rw8hd"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.308584    3280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e90346fb-20dd-4265-8d3b-8f0a270025ce-xtables-lock\") pod \"kindnet-rw8hd\" (UID: \"e90346fb-20dd-4265-8d3b-8f0a270025ce\") " pod="kube-system/kindnet-rw8hd"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.308653    3280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f662ac9-4819-4de1-a149-1427c9be35f4-xtables-lock\") pod \"kube-proxy-mhjb5\" (UID: \"2f662ac9-4819-4de1-a149-1427c9be35f4\") " pod="kube-system/kube-proxy-mhjb5"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.534476    3280 scope.go:117] "RemoveContainer" containerID="bb742a60f04ade79d3b6d8e52d3f63ca2c821b205aceb0ec66cc5f31197be6bc"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.534857    3280 scope.go:117] "RemoveContainer" containerID="05f54a6d8be033bd7c29148b0df899659832d6baf55266ef5cd91ae6387cf6e1"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.539960    3280 scope.go:117] "RemoveContainer" containerID="e2eb1c64ed3cdfabc1a99498e56f978b1d13387b663c261485559c5bf1f864e8"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-056574 -n pause-056574
helpers_test.go:261: (dbg) Run:  kubectl --context pause-056574 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-056574
helpers_test.go:235: (dbg) docker inspect pause-056574:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bb332d83cfeaf7b6f46f8b947a0e17a184842508a616c44663f68d6ee29edddb",
	        "Created": "2023-09-06T20:32:12.431317841Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 757519,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-06T20:32:12.881556944Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c0704b3a4f8b9b9ec71e677be36506d49ffd7d56513ca0bdb5d12d8921195405",
	        "ResolvConfPath": "/var/lib/docker/containers/bb332d83cfeaf7b6f46f8b947a0e17a184842508a616c44663f68d6ee29edddb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bb332d83cfeaf7b6f46f8b947a0e17a184842508a616c44663f68d6ee29edddb/hostname",
	        "HostsPath": "/var/lib/docker/containers/bb332d83cfeaf7b6f46f8b947a0e17a184842508a616c44663f68d6ee29edddb/hosts",
	        "LogPath": "/var/lib/docker/containers/bb332d83cfeaf7b6f46f8b947a0e17a184842508a616c44663f68d6ee29edddb/bb332d83cfeaf7b6f46f8b947a0e17a184842508a616c44663f68d6ee29edddb-json.log",
	        "Name": "/pause-056574",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-056574:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-056574",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9204c146baee94ae117bc8a82fe86f9f386eaeb73a0d4412ae43ca5292a689bd-init/diff:/var/lib/docker/overlay2/ba2e4d17dafea75bb4f24482e38d11907530383cc2bd79f5b12dd92aeb991448/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9204c146baee94ae117bc8a82fe86f9f386eaeb73a0d4412ae43ca5292a689bd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9204c146baee94ae117bc8a82fe86f9f386eaeb73a0d4412ae43ca5292a689bd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9204c146baee94ae117bc8a82fe86f9f386eaeb73a0d4412ae43ca5292a689bd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-056574",
	                "Source": "/var/lib/docker/volumes/pause-056574/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-056574",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-056574",
	                "name.minikube.sigs.k8s.io": "pause-056574",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e160cefc40ea0dcadeb2cf327ee853a88ebaec39440447c0362d0b3a86f2774a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33567"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33566"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33561"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33564"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33563"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e160cefc40ea",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-056574": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "bb332d83cfea",
	                        "pause-056574"
	                    ],
	                    "NetworkID": "e3500bda2ceb336e6887348cf9d9bf6470fa6504795c9cc68203c3575e6664ab",
	                    "EndpointID": "197e3fb51f6aabc61513f445dfe56076e747eaf5a1ef7d12579b70bd539647b1",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-056574 -n pause-056574
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-056574 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-056574 logs -n 25: (2.606045411s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-980317       | scheduled-stop-980317       | jenkins | v1.31.2 | 06 Sep 23 20:30 UTC |                     |
	|         | --schedule 5m                  |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-980317       | scheduled-stop-980317       | jenkins | v1.31.2 | 06 Sep 23 20:30 UTC |                     |
	|         | --schedule 5m                  |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-980317       | scheduled-stop-980317       | jenkins | v1.31.2 | 06 Sep 23 20:30 UTC |                     |
	|         | --schedule 5m                  |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-980317       | scheduled-stop-980317       | jenkins | v1.31.2 | 06 Sep 23 20:30 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-980317       | scheduled-stop-980317       | jenkins | v1.31.2 | 06 Sep 23 20:30 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-980317       | scheduled-stop-980317       | jenkins | v1.31.2 | 06 Sep 23 20:30 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-980317       | scheduled-stop-980317       | jenkins | v1.31.2 | 06 Sep 23 20:30 UTC | 06 Sep 23 20:30 UTC |
	|         | --cancel-scheduled             |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-980317       | scheduled-stop-980317       | jenkins | v1.31.2 | 06 Sep 23 20:31 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-980317       | scheduled-stop-980317       | jenkins | v1.31.2 | 06 Sep 23 20:31 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-980317       | scheduled-stop-980317       | jenkins | v1.31.2 | 06 Sep 23 20:31 UTC | 06 Sep 23 20:31 UTC |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| delete  | -p scheduled-stop-980317       | scheduled-stop-980317       | jenkins | v1.31.2 | 06 Sep 23 20:31 UTC | 06 Sep 23 20:31 UTC |
	| start   | -p insufficient-storage-500291 | insufficient-storage-500291 | jenkins | v1.31.2 | 06 Sep 23 20:31 UTC |                     |
	|         | --memory=2048 --output=json    |                             |         |         |                     |                     |
	|         | --wait=true --driver=docker    |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p insufficient-storage-500291 | insufficient-storage-500291 | jenkins | v1.31.2 | 06 Sep 23 20:32 UTC | 06 Sep 23 20:32 UTC |
	| start   | -p pause-056574 --memory=2048  | pause-056574                | jenkins | v1.31.2 | 06 Sep 23 20:32 UTC | 06 Sep 23 20:33 UTC |
	|         | --install-addons=false         |                             |         |         |                     |                     |
	|         | --wait=all --driver=docker     |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-063967         | NoKubernetes-063967         | jenkins | v1.31.2 | 06 Sep 23 20:32 UTC |                     |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-063967         | NoKubernetes-063967         | jenkins | v1.31.2 | 06 Sep 23 20:32 UTC | 06 Sep 23 20:32 UTC |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-063967         | NoKubernetes-063967         | jenkins | v1.31.2 | 06 Sep 23 20:32 UTC | 06 Sep 23 20:33 UTC |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p NoKubernetes-063967         | NoKubernetes-063967         | jenkins | v1.31.2 | 06 Sep 23 20:33 UTC | 06 Sep 23 20:33 UTC |
	| start   | -p NoKubernetes-063967         | NoKubernetes-063967         | jenkins | v1.31.2 | 06 Sep 23 20:33 UTC | 06 Sep 23 20:33 UTC |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| ssh     | -p NoKubernetes-063967 sudo    | NoKubernetes-063967         | jenkins | v1.31.2 | 06 Sep 23 20:33 UTC |                     |
	|         | systemctl is-active --quiet    |                             |         |         |                     |                     |
	|         | service kubelet                |                             |         |         |                     |                     |
	| stop    | -p NoKubernetes-063967         | NoKubernetes-063967         | jenkins | v1.31.2 | 06 Sep 23 20:33 UTC | 06 Sep 23 20:33 UTC |
	| start   | -p NoKubernetes-063967         | NoKubernetes-063967         | jenkins | v1.31.2 | 06 Sep 23 20:33 UTC | 06 Sep 23 20:33 UTC |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| ssh     | -p NoKubernetes-063967 sudo    | NoKubernetes-063967         | jenkins | v1.31.2 | 06 Sep 23 20:33 UTC |                     |
	|         | systemctl is-active --quiet    |                             |         |         |                     |                     |
	|         | service kubelet                |                             |         |         |                     |                     |
	| delete  | -p NoKubernetes-063967         | NoKubernetes-063967         | jenkins | v1.31.2 | 06 Sep 23 20:33 UTC | 06 Sep 23 20:33 UTC |
	| start   | -p pause-056574                | pause-056574                | jenkins | v1.31.2 | 06 Sep 23 20:33 UTC | 06 Sep 23 20:34 UTC |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 20:33:34
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 20:33:34.313466  765316 out.go:296] Setting OutFile to fd 1 ...
	I0906 20:33:34.313689  765316 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:33:34.313701  765316 out.go:309] Setting ErrFile to fd 2...
	I0906 20:33:34.313707  765316 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:33:34.314132  765316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17116-652515/.minikube/bin
	I0906 20:33:34.314582  765316 out.go:303] Setting JSON to false
	I0906 20:33:34.315720  765316 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11569,"bootTime":1694020846,"procs":374,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0906 20:33:34.315790  765316 start.go:138] virtualization:  
	I0906 20:33:34.319340  765316 out.go:177] * [pause-056574] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0906 20:33:34.327361  765316 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 20:33:34.330841  765316 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 20:33:34.327572  765316 notify.go:220] Checking for updates...
	I0906 20:33:34.335983  765316 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig
	I0906 20:33:34.338698  765316 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube
	I0906 20:33:34.340598  765316 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0906 20:33:34.343246  765316 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 20:33:34.345753  765316 config.go:182] Loaded profile config "pause-056574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 20:33:34.346369  765316 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 20:33:34.375935  765316 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0906 20:33:34.376035  765316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 20:33:34.563800  765316 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-09-06 20:33:34.549453742 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0906 20:33:34.563907  765316 docker.go:294] overlay module found
	I0906 20:33:34.567373  765316 out.go:177] * Using the docker driver based on existing profile
	I0906 20:33:34.569301  765316 start.go:298] selected driver: docker
	I0906 20:33:34.569317  765316 start.go:902] validating driver "docker" against &{Name:pause-056574 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-056574 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 20:33:34.569447  765316 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 20:33:34.569563  765316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 20:33:34.689940  765316 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-09-06 20:33:34.677145194 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0906 20:33:34.690392  765316 cni.go:84] Creating CNI manager for ""
	I0906 20:33:34.690403  765316 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0906 20:33:34.690414  765316 start_flags.go:321] config:
	{Name:pause-056574 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-056574 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 20:33:34.693526  765316 out.go:177] * Starting control plane node pause-056574 in cluster pause-056574
	I0906 20:33:34.695389  765316 cache.go:122] Beginning downloading kic base image for docker with crio
	I0906 20:33:34.697433  765316 out.go:177] * Pulling base image ...
	I0906 20:33:34.699855  765316 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0906 20:33:34.700170  765316 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4
	I0906 20:33:34.700208  765316 cache.go:57] Caching tarball of preloaded images
	I0906 20:33:34.700304  765316 preload.go:174] Found /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0906 20:33:34.700314  765316 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0906 20:33:34.700426  765316 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon
	I0906 20:33:34.700814  765316 profile.go:148] Saving config to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/config.json ...
	I0906 20:33:34.732284  765316 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon, skipping pull
	I0906 20:33:34.732306  765316 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad exists in daemon, skipping load
	I0906 20:33:34.732324  765316 cache.go:195] Successfully downloaded all kic artifacts
	I0906 20:33:34.732372  765316 start.go:365] acquiring machines lock for pause-056574: {Name:mk90a09ef8a87298b0c7a90b2424c10110e9aa4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:33:34.732448  765316 start.go:369] acquired machines lock for "pause-056574" in 50.027µs
	I0906 20:33:34.732479  765316 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:33:34.732489  765316 fix.go:54] fixHost starting: 
	I0906 20:33:34.732759  765316 cli_runner.go:164] Run: docker container inspect pause-056574 --format={{.State.Status}}
	I0906 20:33:34.750767  765316 fix.go:102] recreateIfNeeded on pause-056574: state=Running err=<nil>
	W0906 20:33:34.750796  765316 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 20:33:34.752565  765316 out.go:177] * Updating the running docker "pause-056574" container ...
	I0906 20:33:34.754518  765316 machine.go:88] provisioning docker machine ...
	I0906 20:33:34.754566  765316 ubuntu.go:169] provisioning hostname "pause-056574"
	I0906 20:33:34.754648  765316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-056574
	I0906 20:33:34.773035  765316 main.go:141] libmachine: Using SSH client type: native
	I0906 20:33:34.773516  765316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33567 <nil> <nil>}
	I0906 20:33:34.773535  765316 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-056574 && echo "pause-056574" | sudo tee /etc/hostname
	I0906 20:33:34.929932  765316 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-056574
	
	I0906 20:33:34.930014  765316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-056574
	I0906 20:33:34.953757  765316 main.go:141] libmachine: Using SSH client type: native
	I0906 20:33:34.954237  765316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33567 <nil> <nil>}
	I0906 20:33:34.954262  765316 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-056574' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-056574/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-056574' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:33:35.103845  765316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:33:35.103880  765316 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17116-652515/.minikube CaCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17116-652515/.minikube}
	I0906 20:33:35.103902  765316 ubuntu.go:177] setting up certificates
	I0906 20:33:35.103931  765316 provision.go:83] configureAuth start
	I0906 20:33:35.104005  765316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-056574
	I0906 20:33:35.125515  765316 provision.go:138] copyHostCerts
	I0906 20:33:35.125587  765316 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem, removing ...
	I0906 20:33:35.125600  765316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem
	I0906 20:33:35.125675  765316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem (1082 bytes)
	I0906 20:33:35.125782  765316 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem, removing ...
	I0906 20:33:35.125792  765316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem
	I0906 20:33:35.125821  765316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem (1123 bytes)
	I0906 20:33:35.125893  765316 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem, removing ...
	I0906 20:33:35.125901  765316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem
	I0906 20:33:35.125930  765316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem (1679 bytes)
	I0906 20:33:35.125987  765316 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem org=jenkins.pause-056574 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube pause-056574]
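	(Editor's note: the provision.go line above generates a server certificate whose SAN list mixes IP addresses and DNS names. A minimal, self-contained Go sketch of producing a similarly shaped certificate with crypto/x509 follows; it is illustrative only, self-signed for brevity, and not minikube's actual implementation, which signs with the CA key from ca-key.pem.)

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Key for the server certificate (ECDSA keeps this sketch short).
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}

		// SANs mirroring the log line: IPs plus DNS names.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.pause-056574"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches the profile's CertExpiration
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "pause-056574"},
		}

		// Self-signed here for brevity; minikube signs with its CA instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}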
	I0906 20:33:35.464852  765316 provision.go:172] copyRemoteCerts
	I0906 20:33:35.464921  765316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:33:35.464976  765316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-056574
	I0906 20:33:35.485046  765316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33567 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/pause-056574/id_rsa Username:docker}
	I0906 20:33:35.588536  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 20:33:35.625299  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0906 20:33:35.663087  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:33:35.701708  765316 provision.go:86] duration metric: configureAuth took 597.759466ms
	I0906 20:33:35.701734  765316 ubuntu.go:193] setting minikube options for container-runtime
	I0906 20:33:35.701979  765316 config.go:182] Loaded profile config "pause-056574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 20:33:35.702146  765316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-056574
	I0906 20:33:35.741036  765316 main.go:141] libmachine: Using SSH client type: native
	I0906 20:33:35.742277  765316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33567 <nil> <nil>}
	I0906 20:33:35.742314  765316 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:33:41.370326  765316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:33:41.370355  765316 machine.go:91] provisioned docker machine in 6.615822493s
	I0906 20:33:41.370371  765316 start.go:300] post-start starting for "pause-056574" (driver="docker")
	I0906 20:33:41.370382  765316 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:33:41.370454  765316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:33:41.370833  765316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-056574
	I0906 20:33:41.399017  765316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33567 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/pause-056574/id_rsa Username:docker}
	I0906 20:33:41.505737  765316 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:33:41.513548  765316 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 20:33:41.513586  765316 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 20:33:41.513598  765316 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 20:33:41.513605  765316 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0906 20:33:41.513616  765316 filesync.go:126] Scanning /home/jenkins/minikube-integration/17116-652515/.minikube/addons for local assets ...
	I0906 20:33:41.513686  765316 filesync.go:126] Scanning /home/jenkins/minikube-integration/17116-652515/.minikube/files for local assets ...
	I0906 20:33:41.513786  765316 filesync.go:149] local asset: /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem -> 6579002.pem in /etc/ssl/certs
	I0906 20:33:41.513902  765316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:33:41.525815  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem --> /etc/ssl/certs/6579002.pem (1708 bytes)
	I0906 20:33:41.559398  765316 start.go:303] post-start completed in 189.009691ms
	I0906 20:33:41.559509  765316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 20:33:41.559561  765316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-056574
	I0906 20:33:41.578582  765316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33567 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/pause-056574/id_rsa Username:docker}
	I0906 20:33:41.672501  765316 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 20:33:41.679314  765316 fix.go:56] fixHost completed within 6.946815867s
	I0906 20:33:41.679340  765316 start.go:83] releasing machines lock for "pause-056574", held for 6.94688081s
	I0906 20:33:41.679452  765316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-056574
	I0906 20:33:41.700000  765316 ssh_runner.go:195] Run: cat /version.json
	I0906 20:33:41.700072  765316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-056574
	I0906 20:33:41.700343  765316 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:33:41.700410  765316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-056574
	I0906 20:33:41.720050  765316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33567 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/pause-056574/id_rsa Username:docker}
	I0906 20:33:41.730256  765316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33567 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/pause-056574/id_rsa Username:docker}
	I0906 20:33:41.814929  765316 ssh_runner.go:195] Run: systemctl --version
	I0906 20:33:41.962994  765316 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:33:42.130540  765316 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 20:33:42.138170  765316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:33:42.151518  765316 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0906 20:33:42.151632  765316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:33:42.171316  765316 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0906 20:33:42.171343  765316 start.go:466] detecting cgroup driver to use...
	I0906 20:33:42.171384  765316 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0906 20:33:42.171440  765316 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:33:42.189434  765316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:33:42.207051  765316 docker.go:196] disabling cri-docker service (if available) ...
	I0906 20:33:42.207121  765316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:33:42.229017  765316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:33:42.247146  765316 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:33:42.390753  765316 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:33:42.517835  765316 docker.go:212] disabling docker service ...
	I0906 20:33:42.517907  765316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:33:42.533863  765316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:33:42.547730  765316 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:33:42.677944  765316 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:33:42.808903  765316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:33:42.822945  765316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:33:42.844218  765316 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0906 20:33:42.844284  765316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:33:42.861002  765316 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:33:42.861102  765316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:33:42.873532  765316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:33:42.885916  765316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:33:42.898763  765316 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:33:42.910947  765316 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:33:42.924563  765316 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:33:42.936784  765316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:33:43.449022  765316 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:33:46.252020  765316 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.802957999s)
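	(Editor's note: the sed invocations above rewrite pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A rough Go equivalent of those text substitutions is sketched below; the regexes and file path are copied from the log, but the program itself is illustrative and not minikube's code.)

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const confPath = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(confPath)
		if err != nil {
			panic(err)
		}

		// pause_image = "registry.k8s.io/pause:3.9"
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
		data = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n`).ReplaceAll(data, nil)
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))

		if err := os.WriteFile(confPath, data, 0o644); err != nil {
			panic(err)
		}
	}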
	I0906 20:33:46.252075  765316 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:33:46.252129  765316 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:33:46.270303  765316 start.go:534] Will wait 60s for crictl version
	I0906 20:33:46.270367  765316 ssh_runner.go:195] Run: which crictl
	I0906 20:33:46.287200  765316 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:33:46.390868  765316 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0906 20:33:46.390954  765316 ssh_runner.go:195] Run: crio --version
	I0906 20:33:46.503270  765316 ssh_runner.go:195] Run: crio --version
	I0906 20:33:46.585478  765316 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0906 20:33:46.587551  765316 cli_runner.go:164] Run: docker network inspect pause-056574 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0906 20:33:46.612590  765316 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0906 20:33:46.621996  765316 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0906 20:33:46.622086  765316 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:33:46.679361  765316 crio.go:496] all images are preloaded for cri-o runtime.
	I0906 20:33:46.679387  765316 crio.go:415] Images already preloaded, skipping extraction
	I0906 20:33:46.679443  765316 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:33:46.750400  765316 crio.go:496] all images are preloaded for cri-o runtime.
	I0906 20:33:46.750425  765316 cache_images.go:84] Images are preloaded, skipping loading
	I0906 20:33:46.750502  765316 ssh_runner.go:195] Run: crio config
	I0906 20:33:46.830543  765316 cni.go:84] Creating CNI manager for ""
	I0906 20:33:46.830568  765316 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0906 20:33:46.830591  765316 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 20:33:46.830614  765316 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-056574 NodeName:pause-056574 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 20:33:46.830767  765316 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-056574"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
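	(Editor's note: the multi-document kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. As a quick sanity check outside minikube, one could decode just the ClusterConfiguration document with a minimal hand-written struct; the struct and gopkg.in/yaml.v3 usage below are illustrative and are not the real kubeadm API types.)

	package main

	import (
		"bytes"
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	// clusterConfig covers only the fields this sketch inspects.
	type clusterConfig struct {
		Kind                 string `yaml:"kind"`
		KubernetesVersion    string `yaml:"kubernetesVersion"`
		ControlPlaneEndpoint string `yaml:"controlPlaneEndpoint"`
		Networking           struct {
			PodSubnet     string `yaml:"podSubnet"`
			ServiceSubnet string `yaml:"serviceSubnet"`
		} `yaml:"networking"`
	}

	func main() {
		raw, err := os.ReadFile("kubeadm.yaml") // e.g. a local copy of the generated kubeadm.yaml.new
		if err != nil {
			panic(err)
		}
		dec := yaml.NewDecoder(bytes.NewReader(raw))
		for {
			var cfg clusterConfig
			if err := dec.Decode(&cfg); errors.Is(err, io.EOF) {
				break
			} else if err != nil {
				panic(err)
			}
			if cfg.Kind == "ClusterConfiguration" {
				fmt.Printf("version=%s endpoint=%s podSubnet=%s serviceSubnet=%s\n",
					cfg.KubernetesVersion, cfg.ControlPlaneEndpoint,
					cfg.Networking.PodSubnet, cfg.Networking.ServiceSubnet)
			}
		}
	}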
	
	I0906 20:33:46.830857  765316 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-056574 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:pause-056574 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 20:33:46.830927  765316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0906 20:33:46.843205  765316 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:33:46.843294  765316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:33:46.854431  765316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I0906 20:33:46.896420  765316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:33:46.934716  765316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I0906 20:33:46.960797  765316 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0906 20:33:46.966577  765316 certs.go:56] Setting up /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574 for IP: 192.168.67.2
	I0906 20:33:46.966617  765316 certs.go:190] acquiring lock for shared ca certs: {Name:mk5596cf7beb26b5b83b50e551aa70cf266830a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:33:46.966754  765316 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.key
	I0906 20:33:46.966796  765316 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.key
	I0906 20:33:46.966880  765316 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/client.key
	I0906 20:33:46.966941  765316 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/apiserver.key.c7fa3a9e
	I0906 20:33:46.966982  765316 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/proxy-client.key
	I0906 20:33:46.967090  765316 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/657900.pem (1338 bytes)
	W0906 20:33:46.967119  765316 certs.go:433] ignoring /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/657900_empty.pem, impossibly tiny 0 bytes
	I0906 20:33:46.967129  765316 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:33:46.967153  765316 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem (1082 bytes)
	I0906 20:33:46.967182  765316 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:33:46.967205  765316 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem (1679 bytes)
	I0906 20:33:46.967253  765316 certs.go:437] found cert: /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem (1708 bytes)
	I0906 20:33:46.967848  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 20:33:47.005937  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 20:33:47.051356  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:33:47.110191  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 20:33:47.555983  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:33:47.747732  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 20:33:47.961923  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:33:48.072519  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:33:48.289773  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem --> /usr/share/ca-certificates/6579002.pem (1708 bytes)
	I0906 20:33:48.404448  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:33:48.532333  765316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/certs/657900.pem --> /usr/share/ca-certificates/657900.pem (1338 bytes)
	I0906 20:33:48.682016  765316 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:33:48.774206  765316 ssh_runner.go:195] Run: openssl version
	I0906 20:33:48.810709  765316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:33:48.870390  765316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:33:48.901746  765316 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:33:48.901812  765316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:33:48.939984  765316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:33:48.994584  765316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/657900.pem && ln -fs /usr/share/ca-certificates/657900.pem /etc/ssl/certs/657900.pem"
	I0906 20:33:49.046618  765316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/657900.pem
	I0906 20:33:49.061504  765316 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 20:04 /usr/share/ca-certificates/657900.pem
	I0906 20:33:49.061623  765316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/657900.pem
	I0906 20:33:49.087874  765316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/657900.pem /etc/ssl/certs/51391683.0"
	I0906 20:33:49.139712  765316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6579002.pem && ln -fs /usr/share/ca-certificates/6579002.pem /etc/ssl/certs/6579002.pem"
	I0906 20:33:49.183818  765316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6579002.pem
	I0906 20:33:49.214747  765316 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 20:04 /usr/share/ca-certificates/6579002.pem
	I0906 20:33:49.214885  765316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6579002.pem
	I0906 20:33:49.248302  765316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6579002.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:33:49.283927  765316 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0906 20:33:49.306853  765316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:33:49.342866  765316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:33:49.387558  765316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:33:49.420638  765316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:33:49.453227  765316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:33:49.486312  765316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
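	(Editor's note: each "openssl x509 ... -checkend 86400" run above asks whether the certificate expires within the next 24 hours. The Go sketch below performs the equivalent check with crypto/x509; the file path and the 24-hour window come from the log, everything else is illustrative.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		path := "/var/lib/minikube/certs/apiserver-etcd-client.crt"
		raw, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block found in " + path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Same question as `openssl x509 -checkend 86400`: valid for at least another day?
		deadline := time.Now().Add(24 * time.Hour)
		if cert.NotAfter.Before(deadline) {
			fmt.Printf("%s expires at %s: would need renewal\n", path, cert.NotAfter)
		} else {
			fmt.Printf("%s is valid past %s\n", path, deadline)
		}
	}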
	I0906 20:33:49.522285  765316 kubeadm.go:404] StartCluster: {Name:pause-056574 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-056574 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServ
erIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-p
rovisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 20:33:49.522473  765316 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:33:49.522565  765316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:33:49.708977  765316 cri.go:89] found id: "025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59"
	I0906 20:33:49.709052  765316 cri.go:89] found id: "34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087"
	I0906 20:33:49.709072  765316 cri.go:89] found id: "b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558"
	I0906 20:33:49.709092  765316 cri.go:89] found id: "8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0"
	I0906 20:33:49.709127  765316 cri.go:89] found id: "545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7"
	I0906 20:33:49.709151  765316 cri.go:89] found id: "931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d"
	I0906 20:33:49.709172  765316 cri.go:89] found id: "4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4"
	I0906 20:33:49.709208  765316 cri.go:89] found id: "b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3"
	I0906 20:33:49.709228  765316 cri.go:89] found id: "1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1"
	I0906 20:33:49.709250  765316 cri.go:89] found id: "9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1"
	I0906 20:33:49.709284  765316 cri.go:89] found id: ""
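	(Editor's note: the `sudo runc list -f json` dump that follows is what cri.go parses to match container IDs against pod metadata. Below is a minimal illustrative decoder for that output; the struct keeps only the fields the filter needs and is not minikube's actual type.)

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// runcContainer holds just the fields used below; runc emits many more.
	type runcContainer struct {
		ID          string            `json:"id"`
		Status      string            `json:"status"`
		Annotations map[string]string `json:"annotations"`
	}

	func main() {
		raw, err := os.ReadFile("runc-list.json") // e.g. saved output of `sudo runc list -f json`
		if err != nil {
			panic(err)
		}
		var containers []runcContainer
		if err := json.Unmarshal(raw, &containers); err != nil {
			panic(err)
		}
		for _, c := range containers {
			// Keep only kube-system containers, mirroring the label filter in the crictl call above.
			if c.Annotations["io.kubernetes.pod.namespace"] != "kube-system" {
				continue
			}
			fmt.Printf("%s  %-8s  %s\n", c.ID[:12], c.Status,
				c.Annotations["io.kubernetes.container.name"])
		}
	}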
	I0906 20:33:49.709369  765316 ssh_runner.go:195] Run: sudo runc list -f json
	I0906 20:33:49.843047  765316 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59","pid":2643,"status":"running","bundle":"/run/containers/storage/overlay-containers/025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59/userdata","rootfs":"/var/lib/containers/storage/overlay/9a0fdc0e84afe46fa465ef123b99f238cb7ab6df2d72c8365d9f1daf218965d8/merged","created":"2023-09-06T20:33:47.754975222Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"61920a46","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"61920a46\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termina
tionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:33:47.40697841Z","io.kubernetes.cri-o.Image":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.28.1","io.kubernetes.cri-o.ImageRef":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-056574\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c6d2a7cab994123e8583d4411511571e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-056574_c6d2a7cab994123e8583d4411511571e/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attemp
t\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9a0fdc0e84afe46fa465ef123b99f238cb7ab6df2d72c8365d9f1daf218965d8/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-056574_kube-system_c6d2a7cab994123e8583d4411511571e_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/07072b4ff77295ed198bb1290d87689f5197d61e269ef62b0502747a402a5a05/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"07072b4ff77295ed198bb1290d87689f5197d61e269ef62b0502747a402a5a05","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-056574_kube-system_c6d2a7cab994123e8583d4411511571e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c6d2a7cab994123e8583d4411511571e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"c
ontainer_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c6d2a7cab994123e8583d4411511571e/containers/kube-scheduler/08ab8438\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-056574","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c6d2a7cab994123e8583d4411511571e","kubernetes.io/config.hash":"c6d2a7cab994123e8583d4411511571e","kubernetes.io/config.seen":"2023-09-06T20:32:32.356236838Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1/userdata","root
fs":"/var/lib/containers/storage/overlay/0d39b6a7ce71b0b0b4818a99d81020ebbb8fb26ea088a48aec8d6383ba9671ae/merged","created":"2023-09-06T20:32:33.117974564Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"61920a46","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"61920a46\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:32:32.931764147Z","io.kubernetes.cri-o.Imag
e":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.28.1","io.kubernetes.cri-o.ImageRef":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-056574\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c6d2a7cab994123e8583d4411511571e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-056574_c6d2a7cab994123e8583d4411511571e/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0d39b6a7ce71b0b0b4818a99d81020ebbb8fb26ea088a48aec8d6383ba9671ae/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-056574_kube-system_c6d2a7cab994123e8583d4411511571e_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/o
verlay-containers/07072b4ff77295ed198bb1290d87689f5197d61e269ef62b0502747a402a5a05/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"07072b4ff77295ed198bb1290d87689f5197d61e269ef62b0502747a402a5a05","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-056574_kube-system_c6d2a7cab994123e8583d4411511571e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c6d2a7cab994123e8583d4411511571e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c6d2a7cab994123e8583d4411511571e/containers/kube-scheduler/959d7a9b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":tru
e,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-056574","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c6d2a7cab994123e8583d4411511571e","kubernetes.io/config.hash":"c6d2a7cab994123e8583d4411511571e","kubernetes.io/config.seen":"2023-09-06T20:32:32.356236838Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087","pid":2625,"status":"running","bundle":"/run/containers/storage/overlay-containers/34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087/userdata","rootfs":"/var/lib/containers/storage/overlay/78c0240fabaa90c56a94d81e99fd3a2184693274f31def03c2def7e70a5c4e5b/merged","created":"2023-09-06T20:33:47.591827705Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b7243b12","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.res
tartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b7243b12\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:33:47.259333074Z","io.kubernetes.cri-o.Image":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.28.1","io.kubernetes.cri-o.ImageRef":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-
controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-056574\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"16b1e5bd06f3d89b712ef5511a1ff134\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-056574_16b1e5bd06f3d89b712ef5511a1ff134/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/78c0240fabaa90c56a94d81e99fd3a2184693274f31def03c2def7e70a5c4e5b/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-056574_kube-system_16b1e5bd06f3d89b712ef5511a1ff134_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ff36952b952c004bc87e21ab2ad4188764ee0c7ec492bc0b934dbdb303c0aae7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ff36952b952c004bc87e21ab2ad4188764ee0c7ec492bc0b934dbdb303c0aae7","io.kubernetes.cri-o.SandboxNam
e":"k8s_kube-controller-manager-pause-056574_kube-system_16b1e5bd06f3d89b712ef5511a1ff134_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/16b1e5bd06f3d89b712ef5511a1ff134/containers/kube-controller-manager/badcb97d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/16b1e5bd06f3d89b712ef5511a1ff134/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manage
r.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-056574","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"16b1e5bd06f3d89b71
2ef5511a1ff134","kubernetes.io/config.hash":"16b1e5bd06f3d89b712ef5511a1ff134","kubernetes.io/config.seen":"2023-09-06T20:32:32.356235886Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4/userdata","rootfs":"/var/lib/containers/storage/overlay/09e08c1840b9ddc6b7abb7882334429a311dbd153747ace4e1eab0434302f582/merged","created":"2023-09-06T20:33:00.586685495Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"5b6be1","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"5b6be1\",\"io.kubernetes.container.resta
rtCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:33:00.550150556Z","io.kubernetes.cri-o.Image":"b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230511-dc714da8","io.kubernetes.cri-o.ImageRef":"b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-rw8hd\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e90346fb-20dd-4265-8d3b-8f0a270025ce\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-rw8hd_e90346fb-20dd-4265
-8d3b-8f0a270025ce/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/09e08c1840b9ddc6b7abb7882334429a311dbd153747ace4e1eab0434302f582/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-rw8hd_kube-system_e90346fb-20dd-4265-8d3b-8f0a270025ce_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/bb7982c6df4f0bbd6b02cdca8427fba6fe97e6154887c4d548449995a73fca8d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"bb7982c6df4f0bbd6b02cdca8427fba6fe97e6154887c4d548449995a73fca8d","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-rw8hd_kube-system_e90346fb-20dd-4265-8d3b-8f0a270025ce_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"se
linux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e90346fb-20dd-4265-8d3b-8f0a270025ce/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e90346fb-20dd-4265-8d3b-8f0a270025ce/containers/kindnet-cni/e0d6d98f\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/e90346fb-20dd-4265-8d3b-8f0a270025ce/volumes/kubernetes.io~projected/kube-api-access-bvhwl\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-rw8hd","io.kubernetes.pod.name
space":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e90346fb-20dd-4265-8d3b-8f0a270025ce","kubernetes.io/config.seen":"2023-09-06T20:32:58.685594541Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7/userdata","rootfs":"/var/lib/containers/storage/overlay/49de6b079c2a491ab0497adb3974e73fece3417bc7b8451d518a41c4fb9cbcf8/merged","created":"2023-09-06T20:33:31.702279839Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"f0a6b0f8","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.k
ubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"f0a6b0f8\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:33:31.638983323Z","io.kubernetes.cri-o.IP.0":"10
.244.0.2","io.kubernetes.cri-o.Image":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri-o.ImageRef":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5dd5756b68-5tvwb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d2358999-88bf-4ed4-b2ca-c2fb70773e36\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5dd5756b68-5tvwb_d2358999-88bf-4ed4-b2ca-c2fb70773e36/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/49de6b079c2a491ab0497adb3974e73fece3417bc7b8451d518a41c4fb9cbcf8/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5dd5756b68-5tvwb_kube-system_d2358999-88bf-4ed4-b2ca-c2fb70773e36_0","io.kubernetes.cri-o.ResolvPath":"/run/container
s/storage/overlay-containers/d446768dbcd6e7973cdd3f1e55bcfad6d797985bb6b132644d4e2b88258a3eb3/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"d446768dbcd6e7973cdd3f1e55bcfad6d797985bb6b132644d4e2b88258a3eb3","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5dd5756b68-5tvwb_kube-system_d2358999-88bf-4ed4-b2ca-c2fb70773e36_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/d2358999-88bf-4ed4-b2ca-c2fb70773e36/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d2358999-88bf-4ed4-b2ca-c2fb70773e36/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d2358999-88bf-4
ed4-b2ca-c2fb70773e36/containers/coredns/8c896daa\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/d2358999-88bf-4ed4-b2ca-c2fb70773e36/volumes/kubernetes.io~projected/kube-api-access-v6dff\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5dd5756b68-5tvwb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d2358999-88bf-4ed4-b2ca-c2fb70773e36","kubernetes.io/config.seen":"2023-09-06T20:33:31.246022801Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0/userdata","rootfs":"/var/lib/containers/storage/overlay/6373dce176da5954377857
8d6665461d3c2dc0d9933f1dfb468f5a4fd018ac3c/merged","created":"2023-09-06T20:33:43.556515142Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"dda786a5","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"dda786a5\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:33:43.264099732Z","io.kubernetes.cri-o.Image":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io
.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri-o.ImageRef":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-056574\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"37fb3a22f6eccf83d612f100244ce554\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-056574_37fb3a22f6eccf83d612f100244ce554/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6373dce176da59543778578d6665461d3c2dc0d9933f1dfb468f5a4fd018ac3c/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-056574_kube-system_37fb3a22f6eccf83d612f100244ce554_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/21221832e99b3e31cd6beb4d57d454fb31112ee01f4c8c0d66d54a580badde87/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2
1221832e99b3e31cd6beb4d57d454fb31112ee01f4c8c0d66d54a580badde87","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-056574_kube-system_37fb3a22f6eccf83d612f100244ce554_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/37fb3a22f6eccf83d612f100244ce554/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/37fb3a22f6eccf83d612f100244ce554/containers/etcd/cc2c9ec4\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propag
ation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-056574","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"37fb3a22f6eccf83d612f100244ce554","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"37fb3a22f6eccf83d612f100244ce554","kubernetes.io/config.seen":"2023-09-06T20:32:32.356228387Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1/userdata","rootfs":"/var/lib/containers/storage/overlay/0bc647f5fb26b350ddfa19494d30afb617a6eafa5c7da09827a2d89e9447c228/merged","created":"2023-09-06T20:32:33.139041143Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c997f2bc","io.kubernetes.container.name
":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c997f2bc\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:32:32.910534918Z","io.kubernetes.cri-o.Image":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.1","io.kubernetes.cri-o.ImageRef":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","io.kubernetes.cri-o.Labels":"{\"
io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-056574\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9eed4bbee484bdf886f9c44e782aff8a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-056574_9eed4bbee484bdf886f9c44e782aff8a/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0bc647f5fb26b350ddfa19494d30afb617a6eafa5c7da09827a2d89e9447c228/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-056574_kube-system_9eed4bbee484bdf886f9c44e782aff8a_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/dea0c642ad445c73376b0494852befa1f0f7ab3a490a469671e36b6039742ff7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"dea0c642ad445c73376b0494852befa1f0f7ab3a490a469671e36b6039742ff7","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-056574_kub
e-system_9eed4bbee484bdf886f9c44e782aff8a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9eed4bbee484bdf886f9c44e782aff8a/containers/kube-apiserver/68b84355\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9eed4bbee484bdf886f9c44e782aff8a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":tru
e,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-056574","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9eed4bbee484bdf886f9c44e782aff8a","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"9eed4bbee484bdf886f9c44e782aff8a","kubernetes.io/config.seen":"2023-09-06T20:32:32.356234565Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/931
d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d/userdata","rootfs":"/var/lib/containers/storage/overlay/9150ef67f282b8509500346127c1e9d8e62082a39c1f46891b83b65ce6f9f60b/merged","created":"2023-09-06T20:33:00.886032439Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"f7cf1de9","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"f7cf1de9\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.
cri-o.Created":"2023-09-06T20:33:00.832992491Z","io.kubernetes.cri-o.Image":"812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.28.1","io.kubernetes.cri-o.ImageRef":"812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-mhjb5\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2f662ac9-4819-4de1-a149-1427c9be35f4\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-mhjb5_2f662ac9-4819-4de1-a149-1427c9be35f4/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9150ef67f282b8509500346127c1e9d8e62082a39c1f46891b83b65ce6f9f60b/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-mhjb5_kube-system_2f662ac9-4819-4de1-a149-1427c9be35f4_0","io.kubernetes.cri-o.Resolv
Path":"/run/containers/storage/overlay-containers/dc2a0c975464dc25e7bfefc575d08a0a3618933283327721de1d249ce091b30f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"dc2a0c975464dc25e7bfefc575d08a0a3618933283327721de1d249ce091b30f","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-mhjb5_kube-system_2f662ac9-4819-4de1-a149-1427c9be35f4_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2f662ac9-4819-4de1-a149-1427c9be35f4/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/terminat
ion-log\",\"host_path\":\"/var/lib/kubelet/pods/2f662ac9-4819-4de1-a149-1427c9be35f4/containers/kube-proxy/0cfce1a2\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/2f662ac9-4819-4de1-a149-1427c9be35f4/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/2f662ac9-4819-4de1-a149-1427c9be35f4/volumes/kubernetes.io~projected/kube-api-access-l7wqp\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-mhjb5","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2f662ac9-4819-4de1-a149-1427c9be35f4","kubernetes.io/config.seen":"2023-09-06T20:32:58.684519695Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b0aa
8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3/userdata","rootfs":"/var/lib/containers/storage/overlay/617854d4035a3adccb6a613fdb235f483e73c817d8bc69ce8d9864bba04b8f05/merged","created":"2023-09-06T20:32:33.124641754Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b7243b12","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b7243b12\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.ku
bernetes.cri-o.ContainerID":"b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:32:32.953702159Z","io.kubernetes.cri-o.Image":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.28.1","io.kubernetes.cri-o.ImageRef":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-056574\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"16b1e5bd06f3d89b712ef5511a1ff134\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-056574_16b1e5bd06f3d89b712ef5511a1ff134/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/l
ib/containers/storage/overlay/617854d4035a3adccb6a613fdb235f483e73c817d8bc69ce8d9864bba04b8f05/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-056574_kube-system_16b1e5bd06f3d89b712ef5511a1ff134_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ff36952b952c004bc87e21ab2ad4188764ee0c7ec492bc0b934dbdb303c0aae7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ff36952b952c004bc87e21ab2ad4188764ee0c7ec492bc0b934dbdb303c0aae7","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-056574_kube-system_16b1e5bd06f3d89b712ef5511a1ff134_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\
"/var/lib/kubelet/pods/16b1e5bd06f3d89b712ef5511a1ff134/containers/kube-controller-manager/ac34c3ca\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/16b1e5bd06f3d89b712ef5511a1ff134/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-ce
rtificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-056574","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"16b1e5bd06f3d89b712ef5511a1ff134","kubernetes.io/config.hash":"16b1e5bd06f3d89b712ef5511a1ff134","kubernetes.io/config.seen":"2023-09-06T20:32:32.356235886Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558","pid":2608,"status":"running","bundle":"/run/containers/storage/overlay-containers/b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558/userdata","rootfs":"/var/lib/containers/storag
e/overlay/387ce735afb15105697157ef2c46f8ecb72840ff5d206382be8c6f32b6b7b959/merged","created":"2023-09-06T20:33:47.714197032Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c997f2bc","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c997f2bc\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-06T20:33:47.251091707Z","io.kubernetes.cri-o.Image":"b29fb62480892633ac479243b98
41b88f9ae30865773fd76b97522541cd5633a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.1","io.kubernetes.cri-o.ImageRef":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-056574\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9eed4bbee484bdf886f9c44e782aff8a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-056574_9eed4bbee484bdf886f9c44e782aff8a/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/387ce735afb15105697157ef2c46f8ecb72840ff5d206382be8c6f32b6b7b959/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-056574_kube-system_9eed4bbee484bdf886f9c44e782aff8a_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers
/dea0c642ad445c73376b0494852befa1f0f7ab3a490a469671e36b6039742ff7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"dea0c642ad445c73376b0494852befa1f0f7ab3a490a469671e36b6039742ff7","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-056574_kube-system_9eed4bbee484bdf886f9c44e782aff8a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9eed4bbee484bdf886f9c44e782aff8a/containers/kube-apiserver/ddbd8f57\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9eed4bbee484bdf886f9c44e782aff8a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel
\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-056574","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9eed4bbee484bdf886f9c44e782aff8a","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"9eed4bbee484bdf886f9c44e782aff8a","kubernetes.io/config.seen":"2023-09-06
T20:32:32.356234565Z","kubernetes.io/config.source":"file"},"owner":"root"}]
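The JSON above is the container state list that the cri.go lines below iterate over: an array of OCI runtime state objects, each carrying an id, a status, and CRI-O annotations. As an illustration only (not minikube's code), a small Go program could decode such a list from stdin and print the fields the log acts on; the struct below is an assumption based solely on the JSON shown here.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // containerState models only the fields visible in the state dump above.
    type containerState struct {
    	ID          string            `json:"id"`
    	Status      string            `json:"status"`
    	Annotations map[string]string `json:"annotations"`
    }

    func main() {
    	var states []containerState
    	if err := json.NewDecoder(os.Stdin).Decode(&states); err != nil {
    		fmt.Fprintln(os.Stderr, "decode:", err)
    		os.Exit(1)
    	}
    	for _, c := range states {
    		// Print a short ID, the status, and the Kubernetes container name annotation.
    		name := c.Annotations["io.kubernetes.container.name"]
    		fmt.Printf("%.12s  %-8s  %s\n", c.ID, c.Status, name)
    	}
    }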
	I0906 20:33:49.843906  765316 cri.go:126] list returned 10 containers
	I0906 20:33:49.843958  765316 cri.go:129] container: {ID:025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59 Status:running}
	I0906 20:33:49.843993  765316 cri.go:135] skipping {025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59 running}: state = "running", want "paused"
	I0906 20:33:49.844030  765316 cri.go:129] container: {ID:1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1 Status:stopped}
	I0906 20:33:49.844057  765316 cri.go:135] skipping {1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1 stopped}: state = "stopped", want "paused"
	I0906 20:33:49.844079  765316 cri.go:129] container: {ID:34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087 Status:running}
	I0906 20:33:49.844113  765316 cri.go:135] skipping {34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087 running}: state = "running", want "paused"
	I0906 20:33:49.844137  765316 cri.go:129] container: {ID:4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4 Status:stopped}
	I0906 20:33:49.844159  765316 cri.go:135] skipping {4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4 stopped}: state = "stopped", want "paused"
	I0906 20:33:49.844198  765316 cri.go:129] container: {ID:545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7 Status:stopped}
	I0906 20:33:49.844223  765316 cri.go:135] skipping {545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7 stopped}: state = "stopped", want "paused"
	I0906 20:33:49.844246  765316 cri.go:129] container: {ID:8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0 Status:stopped}
	I0906 20:33:49.844279  765316 cri.go:135] skipping {8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0 stopped}: state = "stopped", want "paused"
	I0906 20:33:49.844307  765316 cri.go:129] container: {ID:9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1 Status:stopped}
	I0906 20:33:49.844329  765316 cri.go:135] skipping {9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1 stopped}: state = "stopped", want "paused"
	I0906 20:33:49.844364  765316 cri.go:129] container: {ID:931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d Status:stopped}
	I0906 20:33:49.844390  765316 cri.go:135] skipping {931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d stopped}: state = "stopped", want "paused"
	I0906 20:33:49.844412  765316 cri.go:129] container: {ID:b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3 Status:stopped}
	I0906 20:33:49.844446  765316 cri.go:135] skipping {b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3 stopped}: state = "stopped", want "paused"
	I0906 20:33:49.844470  765316 cri.go:129] container: {ID:b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558 Status:running}
	I0906 20:33:49.844491  765316 cri.go:135] skipping {b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558 running}: state = "running", want "paused"
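The "skipping" lines above amount to a simple state filter: every listed container is compared against the wanted state ("paused") and kept only on a match. A minimal sketch of that filter, with illustrative types rather than minikube's own, is:

    package main

    import "fmt"

    type container struct {
    	ID     string
    	Status string
    }

    // filterByState keeps only containers whose status matches the wanted state.
    func filterByState(all []container, want string) []container {
    	var kept []container
    	for _, c := range all {
    		if c.Status != want {
    			// corresponds to: skipping {<id> <status>}: state = "<status>", want "paused"
    			continue
    		}
    		kept = append(kept, c)
    	}
    	return kept
    }

    func main() {
    	all := []container{
    		{ID: "025ca323c389", Status: "running"},
    		{ID: "1204f0888a26", Status: "stopped"},
    	}
    	fmt.Println(filterByState(all, "paused")) // prints [] — nothing is paused yet
    }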
	I0906 20:33:49.844580  765316 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:33:49.877572  765316 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0906 20:33:49.877650  765316 kubeadm.go:636] restartCluster start
	I0906 20:33:49.877747  765316 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:33:49.891923  765316 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:33:49.892613  765316 kubeconfig.go:92] found "pause-056574" server: "https://192.168.67.2:8443"
	I0906 20:33:49.894755  765316 kapi.go:59] client config for pause-056574: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/client.crt", KeyFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/client.key", CAFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x172c280), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
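The rest.Config dump above lists the client certificate, key, and CA paths used to reach the apiserver at https://192.168.67.2:8443. Assuming a client-go dependency, an equivalent configuration could be built roughly like this (an illustrative sketch, not the code that produced the dump):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	// Host and file paths are taken from the rest.Config dump in the log.
    	cfg := &rest.Config{
    		Host: "https://192.168.67.2:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/client.crt",
    			KeyFile:  "/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/client.key",
    			CAFile:   "/home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt",
    		},
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("client configured:", client != nil)
    }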
	I0906 20:33:49.895932  765316 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:33:49.938872  765316 api_server.go:166] Checking apiserver status ...
	I0906 20:33:49.938986  765316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:33:49.978295  765316 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2608/cgroup
	I0906 20:33:50.021129  765316 api_server.go:182] apiserver freezer: "8:freezer:/docker/bb332d83cfeaf7b6f46f8b947a0e17a184842508a616c44663f68d6ee29edddb/crio/crio-b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558"
	I0906 20:33:50.021297  765316 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bb332d83cfeaf7b6f46f8b947a0e17a184842508a616c44663f68d6ee29edddb/crio/crio-b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558/freezer.state
	I0906 20:33:50.046744  765316 api_server.go:204] freezer state: "THAWED"
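The freezer check above reads the container's cgroup freezer.state to decide whether the apiserver is paused; "THAWED" means it is running normally. A hedged sketch of that read, using a placeholder cgroup path instead of the real container IDs, might look like:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // freezerState reads the cgroup v1 freezer state ("THAWED" or "FROZEN") for a
    // cgroup path shaped like "/docker/<node-id>/crio/crio-<container-id>".
    func freezerState(cgroupPath string) (string, error) {
    	data, err := os.ReadFile(filepath.Join("/sys/fs/cgroup/freezer", cgroupPath, "freezer.state"))
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(data)), nil
    }

    func main() {
    	state, err := freezerState("/docker/<node-id>/crio/crio-<container-id>")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("freezer state:", state) // "THAWED" while the apiserver is running
    }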
	I0906 20:33:50.046776  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:33:55.047189  765316 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 20:33:55.047240  765316 retry.go:31] will retry after 310.26661ms: state is "Stopped"
	I0906 20:33:55.357625  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:00.358516  765316 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 20:34:00.358575  765316 retry.go:31] will retry after 290.077348ms: state is "Stopped"
	I0906 20:34:00.648909  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:05.649272  765316 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 20:34:05.649318  765316 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0906 20:34:05.649327  765316 kubeadm.go:1128] stopping kube-system containers ...
	I0906 20:34:05.649336  765316 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:34:05.649402  765316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:34:05.713346  765316 cri.go:89] found id: "05f54a6d8be033bd7c29148b0df899659832d6baf55266ef5cd91ae6387cf6e1"
	I0906 20:34:05.713365  765316 cri.go:89] found id: "bb742a60f04ade79d3b6d8e52d3f63ca2c821b205aceb0ec66cc5f31197be6bc"
	I0906 20:34:05.713371  765316 cri.go:89] found id: "e2eb1c64ed3cdfabc1a99498e56f978b1d13387b663c261485559c5bf1f864e8"
	I0906 20:34:05.713375  765316 cri.go:89] found id: "025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59"
	I0906 20:34:05.713379  765316 cri.go:89] found id: "34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087"
	I0906 20:34:05.713384  765316 cri.go:89] found id: "b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558"
	I0906 20:34:05.713388  765316 cri.go:89] found id: "8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0"
	I0906 20:34:05.713393  765316 cri.go:89] found id: "545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7"
	I0906 20:34:05.713397  765316 cri.go:89] found id: "931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d"
	I0906 20:34:05.713404  765316 cri.go:89] found id: "4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4"
	I0906 20:34:05.713408  765316 cri.go:89] found id: "b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3"
	I0906 20:34:05.713412  765316 cri.go:89] found id: "1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1"
	I0906 20:34:05.713416  765316 cri.go:89] found id: "9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1"
	I0906 20:34:05.713420  765316 cri.go:89] found id: ""
	I0906 20:34:05.713425  765316 cri.go:234] Stopping containers: [05f54a6d8be033bd7c29148b0df899659832d6baf55266ef5cd91ae6387cf6e1 bb742a60f04ade79d3b6d8e52d3f63ca2c821b205aceb0ec66cc5f31197be6bc e2eb1c64ed3cdfabc1a99498e56f978b1d13387b663c261485559c5bf1f864e8 025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59 34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087 b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558 8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0 545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7 931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d 4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4 b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3 1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1 9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1]
	I0906 20:34:05.713489  765316 ssh_runner.go:195] Run: which crictl
	I0906 20:34:05.718682  765316 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 05f54a6d8be033bd7c29148b0df899659832d6baf55266ef5cd91ae6387cf6e1 bb742a60f04ade79d3b6d8e52d3f63ca2c821b205aceb0ec66cc5f31197be6bc e2eb1c64ed3cdfabc1a99498e56f978b1d13387b663c261485559c5bf1f864e8 025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59 34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087 b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558 8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0 545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7 931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d 4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4 b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3 1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1 9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1
	I0906 20:34:13.132443  765316 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 05f54a6d8be033bd7c29148b0df899659832d6baf55266ef5cd91ae6387cf6e1 bb742a60f04ade79d3b6d8e52d3f63ca2c821b205aceb0ec66cc5f31197be6bc e2eb1c64ed3cdfabc1a99498e56f978b1d13387b663c261485559c5bf1f864e8 025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59 34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087 b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558 8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0 545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7 931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d 4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4 b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3 1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1 9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1: (7.413722746s)
	W0906 20:34:13.132505  765316 kubeadm.go:689] Failed to stop kube-system containers: port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 05f54a6d8be033bd7c29148b0df899659832d6baf55266ef5cd91ae6387cf6e1 bb742a60f04ade79d3b6d8e52d3f63ca2c821b205aceb0ec66cc5f31197be6bc e2eb1c64ed3cdfabc1a99498e56f978b1d13387b663c261485559c5bf1f864e8 025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59 34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087 b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558 8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0 545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7 931d74a8b380b2c47308a969fa52b42ebcd52c3568e705d172685c0718cac33d 4665cd722e7380d4b8fe05b4045cb0bb5912e5b84eb514c7614688d38cd3bff4 b0aa8ad293b1166d9cc5556d767ed078b1ef0453f1d24ab147eb7ecfce73ecc3 1204f0888a26a5ba311913047c13ee6da9b0356ace122018de289a67b2a531e1 9174327ff747dab9a9ef50c68f4ce6bf5e6dc782b6c358dbcf5321ea989b7cb1: Proce
ss exited with status 1
	stdout:
	05f54a6d8be033bd7c29148b0df899659832d6baf55266ef5cd91ae6387cf6e1
	bb742a60f04ade79d3b6d8e52d3f63ca2c821b205aceb0ec66cc5f31197be6bc
	e2eb1c64ed3cdfabc1a99498e56f978b1d13387b663c261485559c5bf1f864e8
	025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59
	34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087
	b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558
	8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0
	
	stderr:
	E0906 20:34:13.129283    2966 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7\": container with ID starting with 545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7 not found: ID does not exist" containerID="545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7"
	time="2023-09-06T20:34:13Z" level=fatal msg="stopping the container \"545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7\": rpc error: code = NotFound desc = could not find container \"545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7\": container with ID starting with 545aa1a63d3e0282697161b62751aaaf3adddbf7dd9378dfdc6c17a5e588b4a7 not found: ID does not exist"
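The warning above shows that a partial crictl stop failure is tolerated: some containers had already exited, so crictl reports NotFound and exits non-zero, but the restart flow continues. As a rough illustration (not minikube's implementation), such a call could be wrapped so a non-zero exit is downgraded to a warning:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // stopContainers runs `crictl stop` on a batch of IDs and downgrades a non-zero
    // exit to a warning, since containers may already be gone by the time stop runs.
    func stopContainers(ids []string) {
    	args := append([]string{"/usr/bin/crictl", "stop", "--timeout=10"}, ids...)
    	out, err := exec.Command("sudo", args...).CombinedOutput()
    	if err != nil {
    		// Mirrors the W-level log line: the failure is logged, not fatal.
    		fmt.Printf("warning: crictl stop failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Printf("stopped:\n%s", out)
    }

    func main() {
    	stopContainers([]string{"05f54a6d8be0", "bb742a60f04a"})
    }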
	I0906 20:34:13.132576  765316 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:34:13.237172  765316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:34:13.248899  765316 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Sep  6 20:32 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Sep  6 20:32 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Sep  6 20:32 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep  6 20:32 /etc/kubernetes/scheduler.conf
	
	I0906 20:34:13.248965  765316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:34:13.260819  765316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:34:13.275196  765316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:34:13.289131  765316 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:34:13.289202  765316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:34:13.303821  765316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:34:13.316447  765316 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:34:13.316537  765316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
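The grep/rm sequence above keeps a kubeconfig only if it already references https://control-plane.minikube.internal:8443 and deletes it otherwise so kubeadm can regenerate it. A compact sketch of that pruning step, under the assumption that a plain substring check is sufficient, is:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    // pruneIfStale deletes a kubeconfig that does not reference the expected
    // control-plane endpoint, so `kubeadm init phase kubeconfig` can recreate it.
    func pruneIfStale(path string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	if bytes.Contains(data, []byte(endpoint)) {
    		return nil // already points at the expected endpoint, keep it
    	}
    	fmt.Printf("%s does not reference %s, removing\n", path, endpoint)
    	return os.Remove(path)
    }

    func main() {
    	for _, f := range []string{
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		if err := pruneIfStale(f); err != nil {
    			fmt.Fprintln(os.Stderr, err)
    		}
    	}
    }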
	I0906 20:34:13.328173  765316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:34:13.342566  765316 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0906 20:34:13.342612  765316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:34:13.627415  765316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:34:15.615649  765316 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.988198966s)
	I0906 20:34:15.615681  765316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:34:15.939965  765316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:34:16.035682  765316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:34:16.138021  765316 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:34:16.138132  765316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:34:16.151245  765316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:34:16.666029  765316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:34:17.166182  765316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:34:17.217247  765316 api_server.go:72] duration metric: took 1.079224593s to wait for apiserver process to appear ...
	I0906 20:34:17.217269  765316 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:34:17.217286  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:17.217590  765316 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0906 20:34:17.217618  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:17.217783  765316 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0906 20:34:17.718478  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:22.719312  765316 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 20:34:22.719345  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:25.544399  765316 api_server.go:279] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:34:25.544424  765316 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:34:25.544437  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:25.595474  765316 api_server.go:279] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:34:25.595500  765316 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:34:25.718647  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:25.738658  765316 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0906 20:34:25.738687  765316 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0906 20:34:26.218140  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:26.240938  765316 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0906 20:34:26.240976  765316 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0906 20:34:26.718271  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:26.758413  765316 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0906 20:34:26.758493  765316 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0906 20:34:27.217931  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:27.229967  765316 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0906 20:34:27.254936  765316 api_server.go:141] control plane version: v1.28.1
	I0906 20:34:27.254962  765316 api_server.go:131] duration metric: took 10.037686008s to wait for apiserver health ...
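The sequence above is a plain healthz poll: GET /healthz on the apiserver, treat 403 and 500 responses as "not ready yet", and stop once the body is "ok". A self-contained sketch of such a poll is below; skipping TLS verification is a simplification for illustration only, since the real client uses the cluster CA and client certificates shown earlier.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls /healthz until it returns 200 "ok" or the overall
    // timeout expires; each individual request times out after 5 seconds.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode) // 403/500 while booting
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.67.2:8443/healthz", 30*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }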
	I0906 20:34:27.254973  765316 cni.go:84] Creating CNI manager for ""
	I0906 20:34:27.254980  765316 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0906 20:34:27.257913  765316 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0906 20:34:27.259425  765316 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0906 20:34:27.270904  765316 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0906 20:34:27.270925  765316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0906 20:34:27.301759  765316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0906 20:34:28.771455  765316 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.469655953s)
	I0906 20:34:28.771484  765316 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:34:28.781469  765316 system_pods.go:59] 7 kube-system pods found
	I0906 20:34:28.781560  765316 system_pods.go:61] "coredns-5dd5756b68-5tvwb" [d2358999-88bf-4ed4-b2ca-c2fb70773e36] Running
	I0906 20:34:28.781581  765316 system_pods.go:61] "etcd-pause-056574" [03cf1bfc-ead0-422a-96d5-db71d30b7fa3] Running
	I0906 20:34:28.781622  765316 system_pods.go:61] "kindnet-rw8hd" [e90346fb-20dd-4265-8d3b-8f0a270025ce] Running
	I0906 20:34:28.781651  765316 system_pods.go:61] "kube-apiserver-pause-056574" [f4ad611e-3361-4424-9530-040ba395f734] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 20:34:28.781678  765316 system_pods.go:61] "kube-controller-manager-pause-056574" [ca30e49f-b35a-4884-bedd-3a64973b3e79] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 20:34:28.781714  765316 system_pods.go:61] "kube-proxy-mhjb5" [2f662ac9-4819-4de1-a149-1427c9be35f4] Running
	I0906 20:34:28.781738  765316 system_pods.go:61] "kube-scheduler-pause-056574" [9cf0da5e-1288-4a48-bce8-e02d8754c94c] Running
	I0906 20:34:28.781759  765316 system_pods.go:74] duration metric: took 10.26799ms to wait for pod list to return data ...
	I0906 20:34:28.781793  765316 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:34:28.785373  765316 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0906 20:34:28.785400  765316 node_conditions.go:123] node cpu capacity is 2
	I0906 20:34:28.785414  765316 node_conditions.go:105] duration metric: took 3.596705ms to run NodePressure ...
	I0906 20:34:28.785430  765316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:34:29.038210  765316 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0906 20:34:29.045894  765316 kubeadm.go:787] kubelet initialised
	I0906 20:34:29.045967  765316 kubeadm.go:788] duration metric: took 7.729129ms waiting for restarted kubelet to initialise ...
	I0906 20:34:29.046002  765316 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:34:29.055519  765316 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-5tvwb" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:29.064066  765316 pod_ready.go:92] pod "coredns-5dd5756b68-5tvwb" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:29.064139  765316 pod_ready.go:81] duration metric: took 8.528489ms waiting for pod "coredns-5dd5756b68-5tvwb" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:29.064173  765316 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:29.072130  765316 pod_ready.go:92] pod "etcd-pause-056574" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:29.072197  765316 pod_ready.go:81] duration metric: took 8.003ms waiting for pod "etcd-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:29.072242  765316 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:31.182583  765316 pod_ready.go:102] pod "kube-apiserver-pause-056574" in "kube-system" namespace has status "Ready":"False"
	I0906 20:34:31.683852  765316 pod_ready.go:92] pod "kube-apiserver-pause-056574" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:31.683912  765316 pod_ready.go:81] duration metric: took 2.61151632s waiting for pod "kube-apiserver-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:31.683950  765316 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:33.986279  765316 pod_ready.go:102] pod "kube-controller-manager-pause-056574" in "kube-system" namespace has status "Ready":"False"
	I0906 20:34:35.986616  765316 pod_ready.go:102] pod "kube-controller-manager-pause-056574" in "kube-system" namespace has status "Ready":"False"
	I0906 20:34:36.984707  765316 pod_ready.go:92] pod "kube-controller-manager-pause-056574" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:36.984730  765316 pod_ready.go:81] duration metric: took 5.300753851s waiting for pod "kube-controller-manager-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:36.984741  765316 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mhjb5" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:36.992075  765316 pod_ready.go:92] pod "kube-proxy-mhjb5" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:36.992095  765316 pod_ready.go:81] duration metric: took 7.34737ms waiting for pod "kube-proxy-mhjb5" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:36.992106  765316 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:36.998875  765316 pod_ready.go:92] pod "kube-scheduler-pause-056574" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:36.998951  765316 pod_ready.go:81] duration metric: took 6.836216ms waiting for pod "kube-scheduler-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:36.998978  765316 pod_ready.go:38] duration metric: took 7.952871853s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
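The pod_ready waits above poll each system pod until its Ready condition is True. Assuming a client-go dependency (this is not minikube's pod_ready helper), the same check can be expressed roughly as:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady polls a pod until its PodReady condition is True or the timeout expires.
    func podReady(client kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, cond := range pod.Status.Conditions {
    				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17116-652515/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(podReady(client, "kube-system", "kube-apiserver-pause-056574", 4*time.Minute))
    }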
	I0906 20:34:36.999027  765316 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 20:34:37.011197  765316 ops.go:34] apiserver oom_adj: -16
	I0906 20:34:37.011286  765316 kubeadm.go:640] restartCluster took 47.133617068s
	I0906 20:34:37.011312  765316 kubeadm.go:406] StartCluster complete in 47.489035957s
	I0906 20:34:37.011366  765316 settings.go:142] acquiring lock: {Name:mk0ee322179d939fb926f535c1408b304c5b8b41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:34:37.011473  765316 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17116-652515/kubeconfig
	I0906 20:34:37.012323  765316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/kubeconfig: {Name:mkd5486ff1869e88b8977ac367495417356f4177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:34:37.012655  765316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 20:34:37.013046  765316 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0906 20:34:37.015493  765316 out.go:177] * Enabled addons: 
	I0906 20:34:37.013584  765316 config.go:182] Loaded profile config "pause-056574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 20:34:37.014456  765316 kapi.go:59] client config for pause-056574: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/client.crt", KeyFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/profiles/pause-056574/client.key", CAFile:"/home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x172c280), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 20:34:37.017943  765316 addons.go:502] enable addons completed in 4.894787ms: enabled=[]
	I0906 20:34:37.021942  765316 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-056574" context rescaled to 1 replicas
	I0906 20:34:37.022087  765316 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:34:37.024476  765316 out.go:177] * Verifying Kubernetes components...
	I0906 20:34:37.026589  765316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:34:37.172311  765316 start.go:880] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0906 20:34:37.172406  765316 node_ready.go:35] waiting up to 6m0s for node "pause-056574" to be "Ready" ...
	I0906 20:34:37.176294  765316 node_ready.go:49] node "pause-056574" has status "Ready":"True"
	I0906 20:34:37.176362  765316 node_ready.go:38] duration metric: took 3.892138ms waiting for node "pause-056574" to be "Ready" ...
	I0906 20:34:37.176384  765316 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:34:37.184824  765316 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5tvwb" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:37.575515  765316 pod_ready.go:92] pod "coredns-5dd5756b68-5tvwb" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:37.575540  765316 pod_ready.go:81] duration metric: took 390.640164ms waiting for pod "coredns-5dd5756b68-5tvwb" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:37.575554  765316 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:37.979328  765316 pod_ready.go:92] pod "etcd-pause-056574" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:37.979401  765316 pod_ready.go:81] duration metric: took 403.838144ms waiting for pod "etcd-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:37.979443  765316 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:38.375867  765316 pod_ready.go:92] pod "kube-apiserver-pause-056574" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:38.375943  765316 pod_ready.go:81] duration metric: took 396.457773ms waiting for pod "kube-apiserver-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:38.375971  765316 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:38.775619  765316 pod_ready.go:92] pod "kube-controller-manager-pause-056574" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:38.775689  765316 pod_ready.go:81] duration metric: took 399.696841ms waiting for pod "kube-controller-manager-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:38.775716  765316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mhjb5" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:39.177197  765316 pod_ready.go:92] pod "kube-proxy-mhjb5" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:39.177227  765316 pod_ready.go:81] duration metric: took 401.490151ms waiting for pod "kube-proxy-mhjb5" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:39.177245  765316 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:39.575606  765316 pod_ready.go:92] pod "kube-scheduler-pause-056574" in "kube-system" namespace has status "Ready":"True"
	I0906 20:34:39.575627  765316 pod_ready.go:81] duration metric: took 398.373922ms waiting for pod "kube-scheduler-pause-056574" in "kube-system" namespace to be "Ready" ...
	I0906 20:34:39.575636  765316 pod_ready.go:38] duration metric: took 2.39922622s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:34:39.575653  765316 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:34:39.575726  765316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:34:39.590691  765316 api_server.go:72] duration metric: took 2.568547077s to wait for apiserver process to appear ...
	I0906 20:34:39.590711  765316 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:34:39.590728  765316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 20:34:39.601600  765316 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0906 20:34:39.603032  765316 api_server.go:141] control plane version: v1.28.1
	I0906 20:34:39.603103  765316 api_server.go:131] duration metric: took 12.384499ms to wait for apiserver health ...
	I0906 20:34:39.603126  765316 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:34:39.779771  765316 system_pods.go:59] 7 kube-system pods found
	I0906 20:34:39.779863  765316 system_pods.go:61] "coredns-5dd5756b68-5tvwb" [d2358999-88bf-4ed4-b2ca-c2fb70773e36] Running
	I0906 20:34:39.779884  765316 system_pods.go:61] "etcd-pause-056574" [03cf1bfc-ead0-422a-96d5-db71d30b7fa3] Running
	I0906 20:34:39.779922  765316 system_pods.go:61] "kindnet-rw8hd" [e90346fb-20dd-4265-8d3b-8f0a270025ce] Running
	I0906 20:34:39.779947  765316 system_pods.go:61] "kube-apiserver-pause-056574" [f4ad611e-3361-4424-9530-040ba395f734] Running
	I0906 20:34:39.779976  765316 system_pods.go:61] "kube-controller-manager-pause-056574" [ca30e49f-b35a-4884-bedd-3a64973b3e79] Running
	I0906 20:34:39.780012  765316 system_pods.go:61] "kube-proxy-mhjb5" [2f662ac9-4819-4de1-a149-1427c9be35f4] Running
	I0906 20:34:39.780035  765316 system_pods.go:61] "kube-scheduler-pause-056574" [9cf0da5e-1288-4a48-bce8-e02d8754c94c] Running
	I0906 20:34:39.780055  765316 system_pods.go:74] duration metric: took 176.911288ms to wait for pod list to return data ...
	I0906 20:34:39.780090  765316 default_sa.go:34] waiting for default service account to be created ...
	I0906 20:34:39.977219  765316 default_sa.go:45] found service account: "default"
	I0906 20:34:39.977245  765316 default_sa.go:55] duration metric: took 197.126488ms for default service account to be created ...
	I0906 20:34:39.977254  765316 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 20:34:40.182355  765316 system_pods.go:86] 7 kube-system pods found
	I0906 20:34:40.182442  765316 system_pods.go:89] "coredns-5dd5756b68-5tvwb" [d2358999-88bf-4ed4-b2ca-c2fb70773e36] Running
	I0906 20:34:40.182468  765316 system_pods.go:89] "etcd-pause-056574" [03cf1bfc-ead0-422a-96d5-db71d30b7fa3] Running
	I0906 20:34:40.182510  765316 system_pods.go:89] "kindnet-rw8hd" [e90346fb-20dd-4265-8d3b-8f0a270025ce] Running
	I0906 20:34:40.182538  765316 system_pods.go:89] "kube-apiserver-pause-056574" [f4ad611e-3361-4424-9530-040ba395f734] Running
	I0906 20:34:40.182563  765316 system_pods.go:89] "kube-controller-manager-pause-056574" [ca30e49f-b35a-4884-bedd-3a64973b3e79] Running
	I0906 20:34:40.182602  765316 system_pods.go:89] "kube-proxy-mhjb5" [2f662ac9-4819-4de1-a149-1427c9be35f4] Running
	I0906 20:34:40.182631  765316 system_pods.go:89] "kube-scheduler-pause-056574" [9cf0da5e-1288-4a48-bce8-e02d8754c94c] Running
	I0906 20:34:40.182655  765316 system_pods.go:126] duration metric: took 205.395489ms to wait for k8s-apps to be running ...
	I0906 20:34:40.183486  765316 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 20:34:40.183592  765316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:34:40.203727  765316 system_svc.go:56] duration metric: took 20.227639ms WaitForService to wait for kubelet.
	I0906 20:34:40.204707  765316 kubeadm.go:581] duration metric: took 3.182559304s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 20:34:40.206224  765316 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:34:40.375845  765316 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0906 20:34:40.375926  765316 node_conditions.go:123] node cpu capacity is 2
	I0906 20:34:40.375951  765316 node_conditions.go:105] duration metric: took 169.705809ms to run NodePressure ...
	I0906 20:34:40.375990  765316 start.go:228] waiting for startup goroutines ...
	I0906 20:34:40.376013  765316 start.go:233] waiting for cluster config update ...
	I0906 20:34:40.376583  765316 start.go:242] writing updated cluster config ...
	I0906 20:34:40.377592  765316 ssh_runner.go:195] Run: rm -f paused
	I0906 20:34:40.470711  765316 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0906 20:34:40.474433  765316 out.go:177] * Done! kubectl is now configured to use "pause-056574" cluster and "default" namespace by default
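	For reference, the readiness and health checks minikube logged above can be reproduced by hand. This is a minimal sketch, assuming the pause-056574 cluster from this run is still up and kubectl points at the kubeconfig written above; the label selector and healthz URL are taken from the log lines, everything else is illustrative:
	  # wait for the kube-dns (CoreDNS) pods that the pod_ready checks above polled
	  kubectl --context pause-056574 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
	  # same probe as "Checking apiserver healthz at https://192.168.67.2:8443/healthz"
	  kubectl --context pause-056574 get --raw /healthz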
	
	* 
	* ==> CRI-O <==
	* Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.556357482Z" level=info msg="Creating container: kube-system/kindnet-rw8hd/kindnet-cni" id=2d84b2f6-f2df-469c-b8e6-8cbb3e8216c4 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.556401732Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.582364895Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c18407509c4bda0b11a9da84486987eef53b2f96e362fab1f2e71e380fcbfb4b/merged/etc/passwd: no such file or directory"
	Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.582415168Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c18407509c4bda0b11a9da84486987eef53b2f96e362fab1f2e71e380fcbfb4b/merged/etc/group: no such file or directory"
	Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.758356135Z" level=info msg="Created container f058a725708e00e3d17eb424bbd3173c87c0b4944cf54886b56e5c7478dc5d93: kube-system/coredns-5dd5756b68-5tvwb/coredns" id=b1b174ec-d79d-479a-a592-47f7be9aa72a name=/runtime.v1.RuntimeService/CreateContainer
	Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.764626510Z" level=info msg="Starting container: f058a725708e00e3d17eb424bbd3173c87c0b4944cf54886b56e5c7478dc5d93" id=d513b22f-268e-49eb-88ad-7eda2b83d457 name=/runtime.v1.RuntimeService/StartContainer
	Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.809168214Z" level=info msg="Started container" PID=3480 containerID=f058a725708e00e3d17eb424bbd3173c87c0b4944cf54886b56e5c7478dc5d93 description=kube-system/coredns-5dd5756b68-5tvwb/coredns id=d513b22f-268e-49eb-88ad-7eda2b83d457 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d446768dbcd6e7973cdd3f1e55bcfad6d797985bb6b132644d4e2b88258a3eb3
	Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.815288345Z" level=info msg="Created container 31b4e161a71bbe6accf806d5653f5daee80c433ee25a0a2046e707ad006d968f: kube-system/kindnet-rw8hd/kindnet-cni" id=2d84b2f6-f2df-469c-b8e6-8cbb3e8216c4 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.816075052Z" level=info msg="Starting container: 31b4e161a71bbe6accf806d5653f5daee80c433ee25a0a2046e707ad006d968f" id=18c03046-b386-43fc-87f9-409314b901c4 name=/runtime.v1.RuntimeService/StartContainer
	Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.840660264Z" level=info msg="Started container" PID=3503 containerID=31b4e161a71bbe6accf806d5653f5daee80c433ee25a0a2046e707ad006d968f description=kube-system/kindnet-rw8hd/kindnet-cni id=18c03046-b386-43fc-87f9-409314b901c4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bb7982c6df4f0bbd6b02cdca8427fba6fe97e6154887c4d548449995a73fca8d
	Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.871300122Z" level=info msg="Created container cdc7daebdb837dc5d6897ebc0fd7d4f64805a146b9718f411ae21639376a364c: kube-system/kube-proxy-mhjb5/kube-proxy" id=9db0accd-0f94-4299-a266-ee37ba7c0ecd name=/runtime.v1.RuntimeService/CreateContainer
	Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.871849151Z" level=info msg="Starting container: cdc7daebdb837dc5d6897ebc0fd7d4f64805a146b9718f411ae21639376a364c" id=ae275483-d7c4-4314-bb50-70b2c483594b name=/runtime.v1.RuntimeService/StartContainer
	Sep 06 20:34:26 pause-056574 crio[2470]: time="2023-09-06 20:34:26.898569293Z" level=info msg="Started container" PID=3501 containerID=cdc7daebdb837dc5d6897ebc0fd7d4f64805a146b9718f411ae21639376a364c description=kube-system/kube-proxy-mhjb5/kube-proxy id=ae275483-d7c4-4314-bb50-70b2c483594b name=/runtime.v1.RuntimeService/StartContainer sandboxID=dc2a0c975464dc25e7bfefc575d08a0a3618933283327721de1d249ce091b30f
	Sep 06 20:34:27 pause-056574 crio[2470]: time="2023-09-06 20:34:27.492613335Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Sep 06 20:34:27 pause-056574 crio[2470]: time="2023-09-06 20:34:27.542324616Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 06 20:34:27 pause-056574 crio[2470]: time="2023-09-06 20:34:27.542358150Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 06 20:34:27 pause-056574 crio[2470]: time="2023-09-06 20:34:27.542373527Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Sep 06 20:34:27 pause-056574 crio[2470]: time="2023-09-06 20:34:27.548964976Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 06 20:34:27 pause-056574 crio[2470]: time="2023-09-06 20:34:27.548997041Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 06 20:34:27 pause-056574 crio[2470]: time="2023-09-06 20:34:27.549013566Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Sep 06 20:34:27 pause-056574 crio[2470]: time="2023-09-06 20:34:27.573747389Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 06 20:34:27 pause-056574 crio[2470]: time="2023-09-06 20:34:27.573782401Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 06 20:34:27 pause-056574 crio[2470]: time="2023-09-06 20:34:27.573800181Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 06 20:34:27 pause-056574 crio[2470]: time="2023-09-06 20:34:27.594685920Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 06 20:34:27 pause-056574 crio[2470]: time="2023-09-06 20:34:27.594730851Z" level=info msg="Updated default CNI network name to kindnet"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	cdc7daebdb837       812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26   18 seconds ago       Running             kube-proxy                2                   dc2a0c975464d       kube-proxy-mhjb5
	31b4e161a71bb       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79   18 seconds ago       Running             kindnet-cni               2                   bb7982c6df4f0       kindnet-rw8hd
	f058a725708e0       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   18 seconds ago       Running             coredns                   2                   d446768dbcd6e       coredns-5dd5756b68-5tvwb
	4c58ee65ee166       8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965   27 seconds ago       Running             kube-controller-manager   2                   ff36952b952c0       kube-controller-manager-pause-056574
	f045348129186       b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87   27 seconds ago       Running             kube-scheduler            2                   07072b4ff7729       kube-scheduler-pause-056574
	bc2c363583248       b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a   27 seconds ago       Running             kube-apiserver            2                   dea0c642ad445       kube-apiserver-pause-056574
	25eee559bbd70       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   38 seconds ago       Running             etcd                      2                   21221832e99b3       etcd-pause-056574
	05f54a6d8be03       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79   49 seconds ago       Exited              kindnet-cni               1                   bb7982c6df4f0       kindnet-rw8hd
	bb742a60f04ad       812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26   49 seconds ago       Exited              kube-proxy                1                   dc2a0c975464d       kube-proxy-mhjb5
	e2eb1c64ed3cd       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   53 seconds ago       Exited              coredns                   1                   d446768dbcd6e       coredns-5dd5756b68-5tvwb
	025ca323c3897       b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87   57 seconds ago       Exited              kube-scheduler            1                   07072b4ff7729       kube-scheduler-pause-056574
	34b113d8d281b       8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965   57 seconds ago       Exited              kube-controller-manager   1                   ff36952b952c0       kube-controller-manager-pause-056574
	b79701e0a8b68       b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a   57 seconds ago       Exited              kube-apiserver            1                   dea0c642ad445       kube-apiserver-pause-056574
	8f78c0810b336       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   About a minute ago   Exited              etcd                      1                   21221832e99b3       etcd-pause-056574
	
	* 
	* ==> coredns [e2eb1c64ed3cdfabc1a99498e56f978b1d13387b663c261485559c5bf1f864e8] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36765 - 9933 "HINFO IN 7100186137038432082.6397341531603900995. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023604619s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [f058a725708e00e3d17eb424bbd3173c87c0b4944cf54886b56e5c7478dc5d93] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43073 - 34093 "HINFO IN 8362203999672292090.7083563004841075398. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014929013s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-056574
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-056574
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138
	                    minikube.k8s.io/name=pause-056574
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_06T20_32_47_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Sep 2023 20:32:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-056574
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Sep 2023 20:34:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Sep 2023 20:34:25 +0000   Wed, 06 Sep 2023 20:32:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Sep 2023 20:34:25 +0000   Wed, 06 Sep 2023 20:32:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Sep 2023 20:34:25 +0000   Wed, 06 Sep 2023 20:32:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Sep 2023 20:34:25 +0000   Wed, 06 Sep 2023 20:33:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    pause-056574
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 f44d9b4f437f4f04954598d8d2de3efa
	  System UUID:                a808912e-078d-4afe-9412-74f8bdb30571
	  Boot ID:                    d5624a78-31f3-41c0-a03f-adfa6e3f71eb
	  Kernel Version:             5.15.0-1044-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-5tvwb                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     107s
	  kube-system                 etcd-pause-056574                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         119s
	  kube-system                 kindnet-rw8hd                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-pause-056574             250m (12%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-pause-056574    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-mhjb5                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-pause-056574             100m (5%)     0 (0%)      0 (0%)           0 (0%)         119s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 104s                   kube-proxy       
	  Normal  Starting                 17s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m13s (x8 over 2m13s)  kubelet          Node pause-056574 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m13s (x8 over 2m13s)  kubelet          Node pause-056574 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m13s (x8 over 2m13s)  kubelet          Node pause-056574 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m                     kubelet          Node pause-056574 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m                     kubelet          Node pause-056574 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                     kubelet          Node pause-056574 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m                     kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s                   node-controller  Node pause-056574 event: Registered Node pause-056574 in Controller
	  Normal  NodeReady                74s                    kubelet          Node pause-056574 status is now: NodeReady
	  Normal  Starting                 29s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s (x8 over 29s)      kubelet          Node pause-056574 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s (x8 over 29s)      kubelet          Node pause-056574 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s (x8 over 29s)      kubelet          Node pause-056574 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7s                     node-controller  Node pause-056574 event: Registered Node pause-056574 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001083] FS-Cache: O-key=[8] '96d3c90000000000'
	[  +0.000766] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000988] FS-Cache: N-cookie d=00000000a39b565b{9p.inode} n=000000002b2f1a65
	[  +0.001160] FS-Cache: N-key=[8] '96d3c90000000000'
	[  +0.002380] FS-Cache: Duplicate cookie detected
	[  +0.000722] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.000991] FS-Cache: O-cookie d=00000000a39b565b{9p.inode} n=00000000f3c7fb8d
	[  +0.001073] FS-Cache: O-key=[8] '96d3c90000000000'
	[  +0.000829] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000982] FS-Cache: N-cookie d=00000000a39b565b{9p.inode} n=0000000050869d71
	[  +0.001077] FS-Cache: N-key=[8] '96d3c90000000000'
	[  +2.999130] FS-Cache: Duplicate cookie detected
	[  +0.000756] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.000998] FS-Cache: O-cookie d=00000000a39b565b{9p.inode} n=00000000da17136c
	[  +0.001217] FS-Cache: O-key=[8] '95d3c90000000000'
	[  +0.000727] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000970] FS-Cache: N-cookie d=00000000a39b565b{9p.inode} n=000000002b2f1a65
	[  +0.001133] FS-Cache: N-key=[8] '95d3c90000000000'
	[  +0.318024] FS-Cache: Duplicate cookie detected
	[  +0.000783] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.000990] FS-Cache: O-cookie d=00000000a39b565b{9p.inode} n=000000003cc11187
	[  +0.001164] FS-Cache: O-key=[8] '9bd3c90000000000'
	[  +0.000748] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000986] FS-Cache: N-cookie d=00000000a39b565b{9p.inode} n=00000000302c6dfe
	[  +0.001111] FS-Cache: N-key=[8] '9bd3c90000000000'
	
	* 
	* ==> etcd [25eee559bbd705e1cab1d36df6cf0fd3f2f4163d971ef2dab2230d7f093e9788] <==
	* {"level":"info","ts":"2023-09-06T20:34:06.279875Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-06T20:34:06.279888Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-06T20:34:06.281313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-09-06T20:34:06.281443Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-09-06T20:34:06.281556Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-06T20:34:06.281584Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-06T20:34:06.287014Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-06T20:34:06.287227Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-09-06T20:34:06.287636Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-09-06T20:34:06.289239Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-06T20:34:06.287812Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-06T20:34:07.850492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-06T20:34:07.850602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-06T20:34:07.850661Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-09-06T20:34:07.8507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2023-09-06T20:34:07.85071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-09-06T20:34:07.850721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2023-09-06T20:34:07.850729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-09-06T20:34:07.851502Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-056574 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-06T20:34:07.851579Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-06T20:34:07.852576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-06T20:34:07.852829Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-06T20:34:07.853745Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-09-06T20:34:07.869282Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-06T20:34:07.869326Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [8f78c0810b336d09a6b687dfcf741a45f7add290974fd871d8748bbcaf37ddf0] <==
	* 
	* 
	* ==> kernel <==
	*  20:34:45 up  3:13,  0 users,  load average: 4.34, 2.77, 2.09
	Linux pause-056574 5.15.0-1044-aws #49~20.04.1-Ubuntu SMP Mon Aug 21 17:10:24 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [05f54a6d8be033bd7c29148b0df899659832d6baf55266ef5cd91ae6387cf6e1] <==
	* I0906 20:33:55.222620       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0906 20:33:55.222918       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I0906 20:33:55.223175       1 main.go:116] setting mtu 1500 for CNI 
	I0906 20:33:55.223220       1 main.go:146] kindnetd IP family: "ipv4"
	I0906 20:33:55.223257       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0906 20:34:05.436303       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	
	* 
	* ==> kindnet [31b4e161a71bbe6accf806d5653f5daee80c433ee25a0a2046e707ad006d968f] <==
	* I0906 20:34:26.932078       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0906 20:34:26.937758       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I0906 20:34:26.938019       1 main.go:116] setting mtu 1500 for CNI 
	I0906 20:34:26.938106       1 main.go:146] kindnetd IP family: "ipv4"
	I0906 20:34:26.938167       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0906 20:34:27.483779       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0906 20:34:27.492411       1 main.go:227] handling current node
	I0906 20:34:37.511668       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0906 20:34:37.511698       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558] <==
	* I0906 20:34:11.864727       1 controller.go:178] quota evaluator worker shutdown
	E0906 20:34:11.869199       1 storage_rbac.go:264] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.869816       1 storage_rbac.go:264] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.872076       1 storage_rbac.go:264] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.873733       1 storage_rbac.go:264] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.875207       1 storage_rbac.go:264] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.876552       1 storage_rbac.go:264] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.877554       1 storage_rbac.go:264] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.878463       1 storage_rbac.go:264] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-after-finished-controller: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-after-finished-controller": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.879220       1 storage_rbac.go:264] unable to reconcile clusterrolebinding.rbac.authorization.k8s.io/system:controller:root-ca-cert-publisher: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:root-ca-cert-publisher": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.881161       1 storage_rbac.go:295] unable to reconcile role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.882805       1 storage_rbac.go:295] unable to reconcile role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.884268       1 storage_rbac.go:295] unable to reconcile role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.885719       1 storage_rbac.go:295] unable to reconcile role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.887160       1 storage_rbac.go:295] unable to reconcile role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.888588       1 storage_rbac.go:295] unable to reconcile role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.890030       1 storage_rbac.go:295] unable to reconcile role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.891492       1 storage_rbac.go:329] unable to reconcile rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.892891       1 storage_rbac.go:329] unable to reconcile rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.894304       1 storage_rbac.go:329] unable to reconcile rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.895714       1 storage_rbac.go:329] unable to reconcile rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.897111       1 storage_rbac.go:329] unable to reconcile rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.898635       1 storage_rbac.go:329] unable to reconcile rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:11.900043       1 storage_rbac.go:329] unable to reconcile rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer": dial tcp 127.0.0.1:8443: connect: connection refused
	E0906 20:34:12.487395       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	
	* 
	* ==> kube-apiserver [bc2c363583248e74baf3152ec7827a5c0906f7c58cd3571705f045e6005ad033] <==
	* I0906 20:34:25.556727       1 controller.go:85] Starting OpenAPI V3 controller
	I0906 20:34:25.556748       1 naming_controller.go:291] Starting NamingConditionController
	I0906 20:34:25.556761       1 establishing_controller.go:76] Starting EstablishingController
	I0906 20:34:25.556774       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0906 20:34:25.556785       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0906 20:34:25.556796       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0906 20:34:25.581961       1 shared_informer.go:318] Caches are synced for configmaps
	I0906 20:34:25.595619       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0906 20:34:25.637192       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0906 20:34:25.637218       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0906 20:34:25.637292       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 20:34:25.637729       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0906 20:34:25.638842       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 20:34:25.657851       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0906 20:34:25.657936       1 aggregator.go:166] initial CRD sync complete...
	I0906 20:34:25.657952       1 autoregister_controller.go:141] Starting autoregister controller
	I0906 20:34:25.657958       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0906 20:34:25.657965       1 cache.go:39] Caches are synced for autoregister controller
	I0906 20:34:25.688057       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0906 20:34:26.414943       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0906 20:34:28.763282       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0906 20:34:28.921177       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0906 20:34:28.931644       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0906 20:34:29.014347       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 20:34:29.025740       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087] <==
	* I0906 20:33:51.078728       1 serving.go:348] Generated self-signed cert in-memory
	I0906 20:33:52.142636       1 controllermanager.go:189] "Starting" version="v1.28.1"
	I0906 20:33:52.142733       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 20:33:52.145887       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0906 20:33:52.145994       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0906 20:33:52.147816       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0906 20:33:52.147879       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-controller-manager [4c58ee65ee166b8edddca4c0d8d07994640c3c10601b2999aaf62240e14b387c] <==
	* I0906 20:34:38.218366       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0906 20:34:38.218414       1 taint_manager.go:211] "Sending events to api server"
	I0906 20:34:38.219084       1 event.go:307] "Event occurred" object="pause-056574" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-056574 event: Registered Node pause-056574 in Controller"
	I0906 20:34:38.231416       1 shared_informer.go:318] Caches are synced for PV protection
	I0906 20:34:38.240735       1 shared_informer.go:318] Caches are synced for daemon sets
	I0906 20:34:38.245790       1 shared_informer.go:318] Caches are synced for PVC protection
	I0906 20:34:38.250069       1 shared_informer.go:318] Caches are synced for disruption
	I0906 20:34:38.251285       1 shared_informer.go:318] Caches are synced for expand
	I0906 20:34:38.253599       1 shared_informer.go:318] Caches are synced for TTL
	I0906 20:34:38.260030       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0906 20:34:38.261173       1 shared_informer.go:318] Caches are synced for deployment
	I0906 20:34:38.265175       1 shared_informer.go:318] Caches are synced for persistent volume
	I0906 20:34:38.270327       1 shared_informer.go:318] Caches are synced for attach detach
	I0906 20:34:38.278128       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0906 20:34:38.293735       1 shared_informer.go:318] Caches are synced for resource quota
	I0906 20:34:38.294946       1 shared_informer.go:318] Caches are synced for resource quota
	I0906 20:34:38.302188       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0906 20:34:38.302332       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0906 20:34:38.302397       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0906 20:34:38.302448       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0906 20:34:38.319451       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0906 20:34:38.320290       1 shared_informer.go:318] Caches are synced for endpoint
	I0906 20:34:38.688533       1 shared_informer.go:318] Caches are synced for garbage collector
	I0906 20:34:38.706446       1 shared_informer.go:318] Caches are synced for garbage collector
	I0906 20:34:38.706632       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [bb742a60f04ade79d3b6d8e52d3f63ca2c821b205aceb0ec66cc5f31197be6bc] <==
	* I0906 20:33:55.478695       1 server_others.go:69] "Using iptables proxy"
	E0906 20:34:05.498276       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-056574": net/http: TLS handshake timeout
	
	* 
	* ==> kube-proxy [cdc7daebdb837dc5d6897ebc0fd7d4f64805a146b9718f411ae21639376a364c] <==
	* I0906 20:34:27.151262       1 server_others.go:69] "Using iptables proxy"
	I0906 20:34:27.190887       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I0906 20:34:27.300881       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0906 20:34:27.320664       1 server_others.go:152] "Using iptables Proxier"
	I0906 20:34:27.320778       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0906 20:34:27.320811       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0906 20:34:27.334219       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0906 20:34:27.334512       1 server.go:846] "Version info" version="v1.28.1"
	I0906 20:34:27.334545       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 20:34:27.340390       1 config.go:97] "Starting endpoint slice config controller"
	I0906 20:34:27.342405       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0906 20:34:27.342453       1 config.go:188] "Starting service config controller"
	I0906 20:34:27.342460       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0906 20:34:27.345715       1 config.go:315] "Starting node config controller"
	I0906 20:34:27.345805       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0906 20:34:27.493937       1 shared_informer.go:318] Caches are synced for node config
	I0906 20:34:27.517199       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0906 20:34:27.517219       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59] <==
	* I0906 20:33:51.229739       1 serving.go:348] Generated self-signed cert in-memory
	W0906 20:34:02.949532       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.168.67.2:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0906 20:34:02.949573       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 20:34:02.949581       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 20:34:10.840666       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0906 20:34:10.840709       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 20:34:10.842596       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 20:34:10.842659       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 20:34:10.858553       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0906 20:34:10.858642       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0906 20:34:11.043914       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 20:34:11.351920       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0906 20:34:11.356144       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0906 20:34:11.356554       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [f045348129186455483905503d377503efdbcdb70c102b147193f54d480f404e] <==
	* I0906 20:34:22.584175       1 serving.go:348] Generated self-signed cert in-memory
	I0906 20:34:25.975687       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0906 20:34:25.975792       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 20:34:25.982006       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0906 20:34:25.982041       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0906 20:34:25.982219       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 20:34:25.982245       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 20:34:25.982428       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0906 20:34:25.982440       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0906 20:34:25.987966       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0906 20:34:25.991942       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0906 20:34:26.082964       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0906 20:34:26.083119       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0906 20:34:26.083257       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Sep 06 20:34:16 pause-056574 kubelet[3280]: I0906 20:34:16.914359    3280 scope.go:117] "RemoveContainer" containerID="b79701e0a8b68a0a9c7b6ea7ce38f36170325a5fec9974445392d0b15223c558"
	Sep 06 20:34:16 pause-056574 kubelet[3280]: I0906 20:34:16.915861    3280 scope.go:117] "RemoveContainer" containerID="34b113d8d281b885d9a60ace38fb4340fc6129329280180e1664a1e58e970087"
	Sep 06 20:34:16 pause-056574 kubelet[3280]: I0906 20:34:16.916386    3280 scope.go:117] "RemoveContainer" containerID="025ca323c389764ffbfd1d756b33ee3e2204cbd949dff299577f6e19ec70da59"
	Sep 06 20:34:16 pause-056574 kubelet[3280]: I0906 20:34:16.955358    3280 kubelet_node_status.go:70] "Attempting to register node" node="pause-056574"
	Sep 06 20:34:16 pause-056574 kubelet[3280]: E0906 20:34:16.955849    3280 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.67.2:8443: connect: connection refused" node="pause-056574"
	Sep 06 20:34:17 pause-056574 kubelet[3280]: W0906 20:34:17.086596    3280 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Sep 06 20:34:17 pause-056574 kubelet[3280]: E0906 20:34:17.086674    3280 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Sep 06 20:34:17 pause-056574 kubelet[3280]: I0906 20:34:17.757331    3280 kubelet_node_status.go:70] "Attempting to register node" node="pause-056574"
	Sep 06 20:34:25 pause-056574 kubelet[3280]: I0906 20:34:25.665634    3280 kubelet_node_status.go:108] "Node was previously registered" node="pause-056574"
	Sep 06 20:34:25 pause-056574 kubelet[3280]: I0906 20:34:25.665953    3280 kubelet_node_status.go:73] "Successfully registered node" node="pause-056574"
	Sep 06 20:34:25 pause-056574 kubelet[3280]: I0906 20:34:25.673284    3280 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 06 20:34:25 pause-056574 kubelet[3280]: I0906 20:34:25.680259    3280 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.215844    3280 apiserver.go:52] "Watching apiserver"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.226030    3280 topology_manager.go:215] "Topology Admit Handler" podUID="e90346fb-20dd-4265-8d3b-8f0a270025ce" podNamespace="kube-system" podName="kindnet-rw8hd"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.228615    3280 topology_manager.go:215] "Topology Admit Handler" podUID="2f662ac9-4819-4de1-a149-1427c9be35f4" podNamespace="kube-system" podName="kube-proxy-mhjb5"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.228717    3280 topology_manager.go:215] "Topology Admit Handler" podUID="d2358999-88bf-4ed4-b2ca-c2fb70773e36" podNamespace="kube-system" podName="coredns-5dd5756b68-5tvwb"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.247609    3280 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.308413    3280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f662ac9-4819-4de1-a149-1427c9be35f4-lib-modules\") pod \"kube-proxy-mhjb5\" (UID: \"2f662ac9-4819-4de1-a149-1427c9be35f4\") " pod="kube-system/kube-proxy-mhjb5"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.308496    3280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e90346fb-20dd-4265-8d3b-8f0a270025ce-lib-modules\") pod \"kindnet-rw8hd\" (UID: \"e90346fb-20dd-4265-8d3b-8f0a270025ce\") " pod="kube-system/kindnet-rw8hd"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.308526    3280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e90346fb-20dd-4265-8d3b-8f0a270025ce-cni-cfg\") pod \"kindnet-rw8hd\" (UID: \"e90346fb-20dd-4265-8d3b-8f0a270025ce\") " pod="kube-system/kindnet-rw8hd"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.308584    3280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e90346fb-20dd-4265-8d3b-8f0a270025ce-xtables-lock\") pod \"kindnet-rw8hd\" (UID: \"e90346fb-20dd-4265-8d3b-8f0a270025ce\") " pod="kube-system/kindnet-rw8hd"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.308653    3280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f662ac9-4819-4de1-a149-1427c9be35f4-xtables-lock\") pod \"kube-proxy-mhjb5\" (UID: \"2f662ac9-4819-4de1-a149-1427c9be35f4\") " pod="kube-system/kube-proxy-mhjb5"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.534476    3280 scope.go:117] "RemoveContainer" containerID="bb742a60f04ade79d3b6d8e52d3f63ca2c821b205aceb0ec66cc5f31197be6bc"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.534857    3280 scope.go:117] "RemoveContainer" containerID="05f54a6d8be033bd7c29148b0df899659832d6baf55266ef5cd91ae6387cf6e1"
	Sep 06 20:34:26 pause-056574 kubelet[3280]: I0906 20:34:26.539960    3280 scope.go:117] "RemoveContainer" containerID="e2eb1c64ed3cdfabc1a99498e56f978b1d13387b663c261485559c5bf1f864e8"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-056574 -n pause-056574
helpers_test.go:261: (dbg) Run:  kubectl --context pause-056574 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (73.38s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (105.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.17.0.1787105702.exe start -p stopped-upgrade-877553 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.17.0.1787105702.exe start -p stopped-upgrade-877553 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m36.717101838s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.17.0.1787105702.exe -p stopped-upgrade-877553 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.17.0.1787105702.exe -p stopped-upgrade-877553 stop: (1.967633797s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-877553 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0906 20:37:37.102362  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-877553 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.844950834s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-877553] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-877553 in cluster stopped-upgrade-877553
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-877553" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 20:37:32.289321  781551 out.go:296] Setting OutFile to fd 1 ...
	I0906 20:37:32.289484  781551 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:37:32.289493  781551 out.go:309] Setting ErrFile to fd 2...
	I0906 20:37:32.289498  781551 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:37:32.289746  781551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17116-652515/.minikube/bin
	I0906 20:37:32.290140  781551 out.go:303] Setting JSON to false
	I0906 20:37:32.291221  781551 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11807,"bootTime":1694020846,"procs":326,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0906 20:37:32.291288  781551 start.go:138] virtualization:  
	I0906 20:37:32.294256  781551 out.go:177] * [stopped-upgrade-877553] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0906 20:37:32.296769  781551 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 20:37:32.298809  781551 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 20:37:32.296924  781551 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0906 20:37:32.296990  781551 notify.go:220] Checking for updates...
	I0906 20:37:32.302603  781551 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig
	I0906 20:37:32.304754  781551 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube
	I0906 20:37:32.306885  781551 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0906 20:37:32.309087  781551 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 20:37:32.311412  781551 config.go:182] Loaded profile config "stopped-upgrade-877553": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0906 20:37:32.313851  781551 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0906 20:37:32.315933  781551 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 20:37:32.341885  781551 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0906 20:37:32.341983  781551 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 20:37:32.435261  781551 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-09-06 20:37:32.424705425 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0906 20:37:32.435375  781551 docker.go:294] overlay module found
	I0906 20:37:32.438371  781551 out.go:177] * Using the docker driver based on existing profile
	I0906 20:37:32.440444  781551 start.go:298] selected driver: docker
	I0906 20:37:32.440470  781551 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-877553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-877553 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.242 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0906 20:37:32.440581  781551 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 20:37:32.441188  781551 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 20:37:32.508286  781551 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-09-06 20:37:32.498725471 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0906 20:37:32.508583  781551 cni.go:84] Creating CNI manager for ""
	I0906 20:37:32.508619  781551 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0906 20:37:32.508634  781551 start_flags.go:321] config:
	{Name:stopped-upgrade-877553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-877553 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.242 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0906 20:37:32.510936  781551 out.go:177] * Starting control plane node stopped-upgrade-877553 in cluster stopped-upgrade-877553
	I0906 20:37:32.512773  781551 cache.go:122] Beginning downloading kic base image for docker with crio
	I0906 20:37:32.514607  781551 out.go:177] * Pulling base image ...
	I0906 20:37:32.516320  781551 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0906 20:37:32.516398  781551 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0906 20:37:32.534817  781551 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0906 20:37:32.534842  781551 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0906 20:37:32.616521  781551 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0906 20:37:32.616681  781551 profile.go:148] Saving config to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/stopped-upgrade-877553/config.json ...
	I0906 20:37:32.616945  781551 cache.go:195] Successfully downloaded all kic artifacts
	I0906 20:37:32.616978  781551 start.go:365] acquiring machines lock for stopped-upgrade-877553: {Name:mkcdd8f29aa1879b64857ea952dcc99a0cf3d9b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:37:32.617036  781551 start.go:369] acquired machines lock for "stopped-upgrade-877553" in 35.84µs
	I0906 20:37:32.617053  781551 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:37:32.617061  781551 fix.go:54] fixHost starting: 
	I0906 20:37:32.617322  781551 cli_runner.go:164] Run: docker container inspect stopped-upgrade-877553 --format={{.State.Status}}
	I0906 20:37:32.617547  781551 cache.go:107] acquiring lock: {Name:mk761ea5917e65ea5320237ae9d3fd919647d74d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:37:32.617617  781551 cache.go:115] /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0906 20:37:32.617625  781551 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 86.753µs
	I0906 20:37:32.617650  781551 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0906 20:37:32.617657  781551 cache.go:107] acquiring lock: {Name:mk1a4e838c2ad274a72380629743f1b35f47dd39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:37:32.617690  781551 cache.go:115] /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0906 20:37:32.617695  781551 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 39.68µs
	I0906 20:37:32.617701  781551 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0906 20:37:32.617708  781551 cache.go:107] acquiring lock: {Name:mkc27320f8e3da16932e91e3f74bf5d5b33dc664 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:37:32.617734  781551 cache.go:115] /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0906 20:37:32.617739  781551 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 31.918µs
	I0906 20:37:32.617746  781551 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0906 20:37:32.617753  781551 cache.go:107] acquiring lock: {Name:mk53179198066eaf3115f5ed6bbe3ab3db1522c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:37:32.617780  781551 cache.go:115] /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0906 20:37:32.617784  781551 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 32.451µs
	I0906 20:37:32.617791  781551 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0906 20:37:32.617798  781551 cache.go:107] acquiring lock: {Name:mk6a4b577aeafaa6ec13d04d8bb7a342c256843b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:37:32.617845  781551 cache.go:115] /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0906 20:37:32.617851  781551 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 52.283µs
	I0906 20:37:32.617858  781551 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0906 20:37:32.617864  781551 cache.go:107] acquiring lock: {Name:mk22f096c6a91c8e67a172b4be8ed0577944fdba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:37:32.617891  781551 cache.go:115] /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0906 20:37:32.617895  781551 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 32.517µs
	I0906 20:37:32.617903  781551 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0906 20:37:32.617909  781551 cache.go:107] acquiring lock: {Name:mk627e07c0eeaa37b5facf9ad8431a66a5f5c500 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:37:32.617934  781551 cache.go:115] /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0906 20:37:32.617940  781551 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 30.581µs
	I0906 20:37:32.617946  781551 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0906 20:37:32.617952  781551 cache.go:107] acquiring lock: {Name:mk9a640a08153bc795cd4dd4cfaabc34e6d59789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:37:32.617982  781551 cache.go:115] /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0906 20:37:32.617986  781551 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 35.233µs
	I0906 20:37:32.617992  781551 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0906 20:37:32.617997  781551 cache.go:87] Successfully saved all images to host disk.
	I0906 20:37:32.635402  781551 fix.go:102] recreateIfNeeded on stopped-upgrade-877553: state=Stopped err=<nil>
	W0906 20:37:32.635442  781551 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 20:37:32.637552  781551 out.go:177] * Restarting existing docker container for "stopped-upgrade-877553" ...
	I0906 20:37:32.639475  781551 cli_runner.go:164] Run: docker start stopped-upgrade-877553
	I0906 20:37:32.968488  781551 cli_runner.go:164] Run: docker container inspect stopped-upgrade-877553 --format={{.State.Status}}
	I0906 20:37:33.006632  781551 kic.go:426] container "stopped-upgrade-877553" state is running.
	I0906 20:37:33.007130  781551 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-877553
	I0906 20:37:33.042145  781551 profile.go:148] Saving config to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/stopped-upgrade-877553/config.json ...
	I0906 20:37:33.042399  781551 machine.go:88] provisioning docker machine ...
	I0906 20:37:33.042439  781551 ubuntu.go:169] provisioning hostname "stopped-upgrade-877553"
	I0906 20:37:33.042498  781551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-877553
	I0906 20:37:33.065765  781551 main.go:141] libmachine: Using SSH client type: native
	I0906 20:37:33.066605  781551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33604 <nil> <nil>}
	I0906 20:37:33.066635  781551 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-877553 && echo "stopped-upgrade-877553" | sudo tee /etc/hostname
	I0906 20:37:33.067512  781551 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0906 20:37:36.226183  781551 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-877553
	
	I0906 20:37:36.226306  781551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-877553
	I0906 20:37:36.244793  781551 main.go:141] libmachine: Using SSH client type: native
	I0906 20:37:36.245250  781551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33604 <nil> <nil>}
	I0906 20:37:36.245273  781551 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-877553' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-877553/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-877553' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:37:36.387365  781551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:37:36.387391  781551 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17116-652515/.minikube CaCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17116-652515/.minikube}
	I0906 20:37:36.387473  781551 ubuntu.go:177] setting up certificates
	I0906 20:37:36.387483  781551 provision.go:83] configureAuth start
	I0906 20:37:36.387553  781551 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-877553
	I0906 20:37:36.407392  781551 provision.go:138] copyHostCerts
	I0906 20:37:36.407463  781551 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem, removing ...
	I0906 20:37:36.407475  781551 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem
	I0906 20:37:36.407558  781551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/cert.pem (1123 bytes)
	I0906 20:37:36.407658  781551 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem, removing ...
	I0906 20:37:36.407670  781551 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem
	I0906 20:37:36.407703  781551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/key.pem (1679 bytes)
	I0906 20:37:36.407765  781551 exec_runner.go:144] found /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem, removing ...
	I0906 20:37:36.407769  781551 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem
	I0906 20:37:36.407795  781551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17116-652515/.minikube/ca.pem (1082 bytes)
	I0906 20:37:36.407838  781551 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-877553 san=[192.168.59.242 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-877553]
	I0906 20:37:37.089804  781551 provision.go:172] copyRemoteCerts
	I0906 20:37:37.089881  781551 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:37:37.089923  781551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-877553
	I0906 20:37:37.108114  781551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33604 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/stopped-upgrade-877553/id_rsa Username:docker}
	I0906 20:37:37.207949  781551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 20:37:37.238259  781551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0906 20:37:37.265632  781551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 20:37:37.291235  781551 provision.go:86] duration metric: configureAuth took 903.738354ms
	I0906 20:37:37.292769  781551 ubuntu.go:193] setting minikube options for container-runtime
	I0906 20:37:37.292975  781551 config.go:182] Loaded profile config "stopped-upgrade-877553": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0906 20:37:37.293118  781551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-877553
	I0906 20:37:37.313664  781551 main.go:141] libmachine: Using SSH client type: native
	I0906 20:37:37.314148  781551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0740] 0x3a30d0 <nil>  [] 0s} 127.0.0.1 33604 <nil> <nil>}
	I0906 20:37:37.314172  781551 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:37:37.823294  781551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:37:37.823320  781551 machine.go:91] provisioned docker machine in 4.780911474s
	I0906 20:37:37.823331  781551 start.go:300] post-start starting for "stopped-upgrade-877553" (driver="docker")
	I0906 20:37:37.823342  781551 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:37:37.823411  781551 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:37:37.823470  781551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-877553
	I0906 20:37:37.853026  781551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33604 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/stopped-upgrade-877553/id_rsa Username:docker}
	I0906 20:37:37.960109  781551 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:37:37.964562  781551 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 20:37:37.964592  781551 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 20:37:37.964607  781551 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 20:37:37.964616  781551 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0906 20:37:37.964626  781551 filesync.go:126] Scanning /home/jenkins/minikube-integration/17116-652515/.minikube/addons for local assets ...
	I0906 20:37:37.964692  781551 filesync.go:126] Scanning /home/jenkins/minikube-integration/17116-652515/.minikube/files for local assets ...
	I0906 20:37:37.964780  781551 filesync.go:149] local asset: /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem -> 6579002.pem in /etc/ssl/certs
	I0906 20:37:37.964884  781551 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:37:37.975833  781551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/ssl/certs/6579002.pem --> /etc/ssl/certs/6579002.pem (1708 bytes)
	I0906 20:37:37.999822  781551 start.go:303] post-start completed in 176.436196ms
	I0906 20:37:37.999924  781551 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 20:37:37.999977  781551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-877553
	I0906 20:37:38.022434  781551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33604 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/stopped-upgrade-877553/id_rsa Username:docker}
	I0906 20:37:38.126421  781551 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 20:37:38.133114  781551 fix.go:56] fixHost completed within 5.516044349s
	I0906 20:37:38.133137  781551 start.go:83] releasing machines lock for "stopped-upgrade-877553", held for 5.516089748s
	I0906 20:37:38.133207  781551 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-877553
	I0906 20:37:38.152272  781551 ssh_runner.go:195] Run: cat /version.json
	I0906 20:37:38.152327  781551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-877553
	I0906 20:37:38.152568  781551 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:37:38.152623  781551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-877553
	I0906 20:37:38.177464  781551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33604 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/stopped-upgrade-877553/id_rsa Username:docker}
	I0906 20:37:38.201729  781551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33604 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/stopped-upgrade-877553/id_rsa Username:docker}
	W0906 20:37:38.282692  781551 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0906 20:37:38.282832  781551 ssh_runner.go:195] Run: systemctl --version
	I0906 20:37:38.417364  781551 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:37:38.518544  781551 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 20:37:38.524299  781551 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:37:38.547363  781551 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0906 20:37:38.547478  781551 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:37:38.587674  781551 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:37:38.587697  781551 start.go:466] detecting cgroup driver to use...
	I0906 20:37:38.587730  781551 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0906 20:37:38.587799  781551 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:37:38.616370  781551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:37:38.628746  781551 docker.go:196] disabling cri-docker service (if available) ...
	I0906 20:37:38.628811  781551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:37:38.641166  781551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:37:38.652887  781551 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0906 20:37:38.666296  781551 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0906 20:37:38.666374  781551 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:37:38.777116  781551 docker.go:212] disabling docker service ...
	I0906 20:37:38.777181  781551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:37:38.790267  781551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:37:38.802838  781551 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:37:38.907369  781551 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:37:39.022970  781551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:37:39.037188  781551 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:37:39.056968  781551 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0906 20:37:39.057051  781551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:37:39.072652  781551 out.go:177] 
	W0906 20:37:39.074534  781551 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0906 20:37:39.074569  781551 out.go:239] * 
	* 
	W0906 20:37:39.075641  781551 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 20:37:39.077908  781551 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:212: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-877553 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (105.53s)

                                                
                                    

Test pass (262/298)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 15.29
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.13
10 TestDownloadOnly/v1.28.1/json-events 10.03
11 TestDownloadOnly/v1.28.1/preload-exists 0
15 TestDownloadOnly/v1.28.1/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.23
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
19 TestBinaryMirror 0.61
22 TestAddons/Setup 167.27
24 TestAddons/parallel/Registry 19.97
26 TestAddons/parallel/InspektorGadget 10.84
27 TestAddons/parallel/MetricsServer 6.16
30 TestAddons/parallel/CSI 50.87
31 TestAddons/parallel/Headlamp 17.73
32 TestAddons/parallel/CloudSpanner 5.74
35 TestAddons/serial/GCPAuth/Namespaces 0.18
36 TestAddons/StoppedEnableDisable 12.29
37 TestCertOptions 42.68
38 TestCertExpiration 251.26
40 TestForceSystemdFlag 34.51
41 TestForceSystemdEnv 39.86
47 TestErrorSpam/setup 33.09
48 TestErrorSpam/start 0.87
49 TestErrorSpam/status 1.11
50 TestErrorSpam/pause 1.88
51 TestErrorSpam/unpause 2.06
52 TestErrorSpam/stop 1.46
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 76.23
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 41.41
59 TestFunctional/serial/KubeContext 0.07
60 TestFunctional/serial/KubectlGetPods 0.11
63 TestFunctional/serial/CacheCmd/cache/add_remote 4.24
64 TestFunctional/serial/CacheCmd/cache/add_local 1.29
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
66 TestFunctional/serial/CacheCmd/cache/list 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
68 TestFunctional/serial/CacheCmd/cache/cache_reload 2.2
69 TestFunctional/serial/CacheCmd/cache/delete 0.11
70 TestFunctional/serial/MinikubeKubectlCmd 0.14
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
72 TestFunctional/serial/ExtraConfig 34.38
73 TestFunctional/serial/ComponentHealth 0.11
74 TestFunctional/serial/LogsCmd 1.88
75 TestFunctional/serial/LogsFileCmd 1.91
76 TestFunctional/serial/InvalidService 4.45
78 TestFunctional/parallel/ConfigCmd 0.5
79 TestFunctional/parallel/DashboardCmd 8.39
80 TestFunctional/parallel/DryRun 1.19
81 TestFunctional/parallel/InternationalLanguage 0.32
82 TestFunctional/parallel/StatusCmd 1.15
86 TestFunctional/parallel/ServiceCmdConnect 8.66
87 TestFunctional/parallel/AddonsCmd 0.17
88 TestFunctional/parallel/PersistentVolumeClaim 27.38
90 TestFunctional/parallel/SSHCmd 0.87
91 TestFunctional/parallel/CpCmd 1.54
93 TestFunctional/parallel/FileSync 0.41
94 TestFunctional/parallel/CertSync 2.09
98 TestFunctional/parallel/NodeLabels 0.09
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.78
102 TestFunctional/parallel/License 0.34
103 TestFunctional/parallel/Version/short 0.07
104 TestFunctional/parallel/Version/components 0.93
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.36
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
109 TestFunctional/parallel/ImageCommands/ImageBuild 5.11
110 TestFunctional/parallel/ImageCommands/Setup 2.66
111 TestFunctional/parallel/UpdateContextCmd/no_changes 0.25
112 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.28
113 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.27
114 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.89
115 TestFunctional/parallel/ServiceCmd/DeployApp 11.42
116 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.2
117 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.33
118 TestFunctional/parallel/ServiceCmd/List 0.44
119 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
120 TestFunctional/parallel/ServiceCmd/HTTPS 0.5
121 TestFunctional/parallel/ServiceCmd/Format 0.51
122 TestFunctional/parallel/ServiceCmd/URL 0.51
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.68
125 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.07
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.51
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.42
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
133 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
137 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
138 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
139 TestFunctional/parallel/ProfileCmd/profile_list 0.4
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
141 TestFunctional/parallel/MountCmd/any-port 8.59
142 TestFunctional/parallel/MountCmd/specific-port 1.88
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.96
144 TestFunctional/delete_addon-resizer_images 0.1
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.03
150 TestIngressAddonLegacy/StartLegacyK8sCluster 97.58
152 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 12
153 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.69
157 TestJSONOutput/start/Command 76.77
158 TestJSONOutput/start/Audit 0
160 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/pause/Command 0.85
164 TestJSONOutput/pause/Audit 0
166 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/unpause/Command 0.78
170 TestJSONOutput/unpause/Audit 0
172 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/stop/Command 5.91
176 TestJSONOutput/stop/Audit 0
178 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
180 TestErrorJSONOutput 0.24
182 TestKicCustomNetwork/create_custom_network 44.31
183 TestKicCustomNetwork/use_default_bridge_network 34.08
184 TestKicExistingNetwork 34.06
185 TestKicCustomSubnet 34.49
186 TestKicStaticIP 39.02
187 TestMainNoArgs 0.06
188 TestMinikubeProfile 76.81
191 TestMountStart/serial/StartWithMountFirst 7.15
192 TestMountStart/serial/VerifyMountFirst 0.3
193 TestMountStart/serial/StartWithMountSecond 7.04
194 TestMountStart/serial/VerifyMountSecond 0.28
195 TestMountStart/serial/DeleteFirst 1.7
196 TestMountStart/serial/VerifyMountPostDelete 0.28
197 TestMountStart/serial/Stop 1.23
198 TestMountStart/serial/RestartStopped 8.14
199 TestMountStart/serial/VerifyMountPostStop 0.29
202 TestMultiNode/serial/FreshStart2Nodes 99.42
203 TestMultiNode/serial/DeployApp2Nodes 5.61
205 TestMultiNode/serial/AddNode 23.65
206 TestMultiNode/serial/ProfileList 0.35
207 TestMultiNode/serial/CopyFile 11.32
208 TestMultiNode/serial/StopNode 2.42
209 TestMultiNode/serial/StartAfterStop 12.87
210 TestMultiNode/serial/RestartKeepsNodes 123.27
211 TestMultiNode/serial/DeleteNode 5.13
212 TestMultiNode/serial/StopMultiNode 24.07
213 TestMultiNode/serial/RestartMultiNode 81.27
214 TestMultiNode/serial/ValidateNameConflict 33.75
219 TestPreload 173.9
221 TestScheduledStopUnix 110.69
224 TestInsufficientStorage 11.3
227 TestKubernetesUpgrade 398.69
231 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
232 TestPause/serial/Start 90.92
233 TestNoKubernetes/serial/StartWithK8s 46.34
234 TestNoKubernetes/serial/StartWithStopK8s 23.67
235 TestNoKubernetes/serial/Start 6.98
236 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
237 TestNoKubernetes/serial/ProfileList 1
238 TestNoKubernetes/serial/Stop 1.24
239 TestNoKubernetes/serial/StartNoArgs 8.14
240 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
242 TestStoppedBinaryUpgrade/Setup 1.09
244 TestStoppedBinaryUpgrade/MinikubeLogs 0.67
259 TestNetworkPlugins/group/false 4.28
264 TestStartStop/group/old-k8s-version/serial/FirstStart 129.01
265 TestStartStop/group/old-k8s-version/serial/DeployApp 10.57
266 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.05
267 TestStartStop/group/old-k8s-version/serial/Stop 12.2
268 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
269 TestStartStop/group/old-k8s-version/serial/SecondStart 429.52
271 TestStartStop/group/no-preload/serial/FirstStart 61.04
272 TestStartStop/group/no-preload/serial/DeployApp 9.51
273 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.19
274 TestStartStop/group/no-preload/serial/Stop 12.2
275 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
276 TestStartStop/group/no-preload/serial/SecondStart 346.76
277 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
278 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
279 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.36
280 TestStartStop/group/old-k8s-version/serial/Pause 3.47
282 TestStartStop/group/embed-certs/serial/FirstStart 81.22
283 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.04
284 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.23
285 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.4
286 TestStartStop/group/no-preload/serial/Pause 3.74
287 TestStartStop/group/embed-certs/serial/DeployApp 9.8
289 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 83.22
290 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.41
291 TestStartStop/group/embed-certs/serial/Stop 12.15
292 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
293 TestStartStop/group/embed-certs/serial/SecondStart 354.91
294 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.52
295 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.28
296 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.09
297 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
298 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 359.12
299 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.03
300 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
301 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.36
302 TestStartStop/group/embed-certs/serial/Pause 3.46
304 TestStartStop/group/newest-cni/serial/FirstStart 48.52
305 TestStartStop/group/newest-cni/serial/DeployApp 0
306 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.5
307 TestStartStop/group/newest-cni/serial/Stop 1.33
308 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.28
309 TestStartStop/group/newest-cni/serial/SecondStart 35.28
310 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 17.04
311 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.19
312 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
313 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
314 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.36
315 TestStartStop/group/newest-cni/serial/Pause 3.61
316 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.55
317 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.97
318 TestNetworkPlugins/group/auto/Start 85.95
319 TestNetworkPlugins/group/kindnet/Start 56.29
320 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
321 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
322 TestNetworkPlugins/group/kindnet/NetCatPod 10.33
323 TestNetworkPlugins/group/kindnet/DNS 0.23
324 TestNetworkPlugins/group/kindnet/Localhost 0.2
325 TestNetworkPlugins/group/kindnet/HairPin 0.21
326 TestNetworkPlugins/group/auto/KubeletFlags 0.47
327 TestNetworkPlugins/group/auto/NetCatPod 12.52
328 TestNetworkPlugins/group/auto/DNS 0.32
329 TestNetworkPlugins/group/auto/Localhost 0.31
330 TestNetworkPlugins/group/auto/HairPin 0.24
331 TestNetworkPlugins/group/calico/Start 78.19
332 TestNetworkPlugins/group/custom-flannel/Start 74.02
333 TestNetworkPlugins/group/calico/ControllerPod 5.09
334 TestNetworkPlugins/group/calico/KubeletFlags 0.34
335 TestNetworkPlugins/group/calico/NetCatPod 11.4
336 TestNetworkPlugins/group/calico/DNS 0.23
337 TestNetworkPlugins/group/calico/Localhost 0.23
338 TestNetworkPlugins/group/calico/HairPin 0.23
339 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.48
340 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.42
341 TestNetworkPlugins/group/custom-flannel/DNS 0.37
342 TestNetworkPlugins/group/custom-flannel/Localhost 0.29
343 TestNetworkPlugins/group/custom-flannel/HairPin 0.29
344 TestNetworkPlugins/group/enable-default-cni/Start 84.5
345 TestNetworkPlugins/group/flannel/Start 71.97
346 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.37
347 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.39
348 TestNetworkPlugins/group/flannel/ControllerPod 5.03
349 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
350 TestNetworkPlugins/group/flannel/NetCatPod 11.33
351 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
352 TestNetworkPlugins/group/enable-default-cni/Localhost 0.26
353 TestNetworkPlugins/group/enable-default-cni/HairPin 0.31
354 TestNetworkPlugins/group/flannel/DNS 0.37
355 TestNetworkPlugins/group/flannel/Localhost 0.28
356 TestNetworkPlugins/group/flannel/HairPin 0.21
357 TestNetworkPlugins/group/bridge/Start 47.28
358 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
359 TestNetworkPlugins/group/bridge/NetCatPod 11.34
360 TestNetworkPlugins/group/bridge/DNS 26.93
361 TestNetworkPlugins/group/bridge/Localhost 0.18
362 TestNetworkPlugins/group/bridge/HairPin 0.19
TestDownloadOnly/v1.16.0/json-events (15.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-363440 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-363440 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (15.290983568s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (15.29s)
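The -o=json flag used here makes minikube emit one JSON event per line instead of plain text. A minimal sketch of consuming that stream outside the test harness, assuming the io.k8s.sigs.minikube.step event type and data.message payload minikube currently emits (field names may differ between releases):

    out/minikube-linux-arm64 start -o=json --download-only -p download-only-363440 \
      --force --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'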

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-363440
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-363440: exit status 85 (133.67758ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-363440 | jenkins | v1.31.2 | 06 Sep 23 19:56 UTC |          |
	|         | -p download-only-363440        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 19:56:37
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 19:56:37.952162  657905 out.go:296] Setting OutFile to fd 1 ...
	I0906 19:56:37.952302  657905 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 19:56:37.952310  657905 out.go:309] Setting ErrFile to fd 2...
	I0906 19:56:37.952315  657905 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 19:56:37.952564  657905 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17116-652515/.minikube/bin
	W0906 19:56:37.952697  657905 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17116-652515/.minikube/config/config.json: open /home/jenkins/minikube-integration/17116-652515/.minikube/config/config.json: no such file or directory
	I0906 19:56:37.953135  657905 out.go:303] Setting JSON to true
	I0906 19:56:37.954204  657905 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":9352,"bootTime":1694020846,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0906 19:56:37.954278  657905 start.go:138] virtualization:  
	I0906 19:56:37.957319  657905 out.go:97] [download-only-363440] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0906 19:56:37.959032  657905 out.go:169] MINIKUBE_LOCATION=17116
	W0906 19:56:37.957516  657905 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball: no such file or directory
	I0906 19:56:37.957605  657905 notify.go:220] Checking for updates...
	I0906 19:56:37.963382  657905 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 19:56:37.965384  657905 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig
	I0906 19:56:37.967163  657905 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube
	I0906 19:56:37.969013  657905 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0906 19:56:37.972477  657905 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 19:56:37.972721  657905 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 19:56:37.997724  657905 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0906 19:56:37.997808  657905 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 19:56:38.101175  657905 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-09-06 19:56:38.091143238 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0906 19:56:38.101280  657905 docker.go:294] overlay module found
	I0906 19:56:38.103216  657905 out.go:97] Using the docker driver based on user configuration
	I0906 19:56:38.103247  657905 start.go:298] selected driver: docker
	I0906 19:56:38.103254  657905 start.go:902] validating driver "docker" against <nil>
	I0906 19:56:38.103351  657905 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 19:56:38.171172  657905 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-09-06 19:56:38.1618398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archite
cture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Ser
verErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0906 19:56:38.171332  657905 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 19:56:38.171612  657905 start_flags.go:384] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0906 19:56:38.171824  657905 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 19:56:38.174286  657905 out.go:169] Using Docker driver with root privileges
	I0906 19:56:38.176060  657905 cni.go:84] Creating CNI manager for ""
	I0906 19:56:38.176075  657905 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0906 19:56:38.176084  657905 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0906 19:56:38.176098  657905 start_flags.go:321] config:
	{Name:download-only-363440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-363440 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 19:56:38.178305  657905 out.go:97] Starting control plane node download-only-363440 in cluster download-only-363440
	I0906 19:56:38.178327  657905 cache.go:122] Beginning downloading kic base image for docker with crio
	I0906 19:56:38.181162  657905 out.go:97] Pulling base image ...
	I0906 19:56:38.181193  657905 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0906 19:56:38.181244  657905 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon
	I0906 19:56:38.198585  657905 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad to local cache
	I0906 19:56:38.199183  657905 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local cache directory
	I0906 19:56:38.199312  657905 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad to local cache
	I0906 19:56:38.247702  657905 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0906 19:56:38.247727  657905 cache.go:57] Caching tarball of preloaded images
	I0906 19:56:38.248220  657905 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0906 19:56:38.250380  657905 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0906 19:56:38.250403  657905 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0906 19:56:38.366771  657905 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0906 19:56:43.313574  657905 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad as a tarball
	I0906 19:56:51.671887  657905 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0906 19:56:51.671986  657905 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0906 19:56:52.614232  657905 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0906 19:56:52.614584  657905 profile.go:148] Saving config to /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/download-only-363440/config.json ...
	I0906 19:56:52.614625  657905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/download-only-363440/config.json: {Name:mk547eefdbe3442c372de1460c852ce0974cc493 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:56:52.615209  657905 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0906 19:56:52.615919  657905 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl.sha1 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/linux/arm64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-363440"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.13s)
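The non-zero exit is expected here: a download-only profile never creates a control plane node, so "minikube logs" has nothing to read and exits with status 85, which is exactly what the test asserts. A quick way to see the same behaviour by hand, using the profile name from this run:

    out/minikube-linux-arm64 logs -p download-only-363440
    echo "exit status: $?"   # 85 in this run, since no control plane node exists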

                                                
                                    
TestDownloadOnly/v1.28.1/json-events (10.03s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-363440 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-363440 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.029047398s)
--- PASS: TestDownloadOnly/v1.28.1/json-events (10.03s)

                                                
                                    
TestDownloadOnly/v1.28.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-363440
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-363440: exit status 85 (78.807532ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-363440 | jenkins | v1.31.2 | 06 Sep 23 19:56 UTC |          |
	|         | -p download-only-363440        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-363440 | jenkins | v1.31.2 | 06 Sep 23 19:56 UTC |          |
	|         | -p download-only-363440        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 19:56:53
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 19:56:53.390863  657978 out.go:296] Setting OutFile to fd 1 ...
	I0906 19:56:53.391103  657978 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 19:56:53.391131  657978 out.go:309] Setting ErrFile to fd 2...
	I0906 19:56:53.391152  657978 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 19:56:53.391424  657978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17116-652515/.minikube/bin
	W0906 19:56:53.391565  657978 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17116-652515/.minikube/config/config.json: open /home/jenkins/minikube-integration/17116-652515/.minikube/config/config.json: no such file or directory
	I0906 19:56:53.391813  657978 out.go:303] Setting JSON to true
	I0906 19:56:53.392860  657978 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":9368,"bootTime":1694020846,"procs":283,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0906 19:56:53.392953  657978 start.go:138] virtualization:  
	I0906 19:56:53.431547  657978 out.go:97] [download-only-363440] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0906 19:56:53.431898  657978 notify.go:220] Checking for updates...
	I0906 19:56:53.464379  657978 out.go:169] MINIKUBE_LOCATION=17116
	I0906 19:56:53.495824  657978 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 19:56:53.527770  657978 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig
	I0906 19:56:53.574903  657978 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube
	I0906 19:56:53.607994  657978 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0906 19:56:53.640150  657978 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 19:56:53.640700  657978 config.go:182] Loaded profile config "download-only-363440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0906 19:56:53.640752  657978 start.go:810] api.Load failed for download-only-363440: filestore "download-only-363440": Docker machine "download-only-363440" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0906 19:56:53.640861  657978 driver.go:373] Setting default libvirt URI to qemu:///system
	W0906 19:56:53.640888  657978 start.go:810] api.Load failed for download-only-363440: filestore "download-only-363440": Docker machine "download-only-363440" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0906 19:56:53.664876  657978 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0906 19:56:53.664965  657978 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 19:56:53.739632  657978 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-09-06 19:56:53.729652466 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0906 19:56:53.739738  657978 docker.go:294] overlay module found
	I0906 19:56:53.767958  657978 out.go:97] Using the docker driver based on existing profile
	I0906 19:56:53.767995  657978 start.go:298] selected driver: docker
	I0906 19:56:53.768002  657978 start.go:902] validating driver "docker" against &{Name:download-only-363440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-363440 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 19:56:53.768193  657978 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 19:56:53.840521  657978 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-09-06 19:56:53.830876321 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0906 19:56:53.840955  657978 cni.go:84] Creating CNI manager for ""
	I0906 19:56:53.840973  657978 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0906 19:56:53.840985  657978 start_flags.go:321] config:
	{Name:download-only-363440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:download-only-363440 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 19:56:53.864748  657978 out.go:97] Starting control plane node download-only-363440 in cluster download-only-363440
	I0906 19:56:53.864793  657978 cache.go:122] Beginning downloading kic base image for docker with crio
	I0906 19:56:53.895969  657978 out.go:97] Pulling base image ...
	I0906 19:56:53.896041  657978 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0906 19:56:53.896110  657978 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon
	I0906 19:56:53.913055  657978 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad to local cache
	I0906 19:56:53.913159  657978 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local cache directory
	I0906 19:56:53.913176  657978 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local cache directory, skipping pull
	I0906 19:56:53.913181  657978 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad exists in cache, skipping pull
	I0906 19:56:53.913188  657978 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad as a tarball
	I0906 19:56:53.971196  657978 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4
	I0906 19:56:53.971223  657978 cache.go:57] Caching tarball of preloaded images
	I0906 19:56:53.975754  657978 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0906 19:56:54.010302  657978 out.go:97] Downloading Kubernetes v1.28.1 preload ...
	I0906 19:56:54.010359  657978 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4 ...
	I0906 19:56:54.125804  657978 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:44f3d096b9be2c2ed42e6b0d364bc859 -> /home/jenkins/minikube-integration/17116-652515/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-363440"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.1/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-363440
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-500664 --alsologtostderr --binary-mirror http://127.0.0.1:44129 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-500664" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-500664
--- PASS: TestBinaryMirror (0.61s)

                                                
                                    
TestAddons/Setup (167.27s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-arm64 start -p addons-342654 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-linux-arm64 start -p addons-342654 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m47.273816363s)
--- PASS: TestAddons/Setup (167.27s)
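The same addons can also be toggled one at a time on an existing profile; a sketch using the addon names from the start flags above (the "addons enable" and "addons list" subcommands are standard minikube, the selection below is illustrative):

    out/minikube-linux-arm64 -p addons-342654 addons enable ingress
    out/minikube-linux-arm64 -p addons-342654 addons enable ingress-dns
    out/minikube-linux-arm64 -p addons-342654 addons enable metrics-server
    out/minikube-linux-arm64 -p addons-342654 addons list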

                                                
                                    
TestAddons/parallel/Registry (19.97s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 51.540172ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-xx7n9" [d9bc8b32-e703-4760-8ebf-167a7f52b2fa] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.017254674s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-v6f29" [1d37360f-66ee-42d0-a616-1b35af8ddf7a] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.017846507s
addons_test.go:316: (dbg) Run:  kubectl --context addons-342654 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-342654 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-342654 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.825203541s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-arm64 -p addons-342654 ip
2023/09/06 20:00:11 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p addons-342654 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.97s)
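The in-cluster probe this test runs can be reproduced by hand; this is essentially the command from the log above, re-run as a one-off pod (service name and image are the ones shown in this run):

    kubectl --context addons-342654 run registry-test --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"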

                                                
                                    
TestAddons/parallel/InspektorGadget (10.84s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ndl2x" [fd09f40a-9b33-4933-b8ce-149ed5f6a94a] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.017983514s
addons_test.go:817: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-342654
addons_test.go:817: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-342654: (5.825095084s)
--- PASS: TestAddons/parallel/InspektorGadget (10.84s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.16s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 4.175379ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-v8gmm" [061eb88b-0263-4464-a3c9-c628c00cc1ab] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.013555312s
addons_test.go:391: (dbg) Run:  kubectl --context addons-342654 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p addons-342654 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p addons-342654 addons disable metrics-server --alsologtostderr -v=1: (1.017201066s)
--- PASS: TestAddons/parallel/MetricsServer (6.16s)
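In practice the assertion amounts to this: once the metrics-server pod reports Ready, "kubectl top" returns usage rows instead of an error. A rough manual equivalent of the polling above (the wait/timeout shape is illustrative, not the test's code):

    kubectl --context addons-342654 -n kube-system wait \
      --for=condition=ready pod -l k8s-app=metrics-server --timeout=6m
    kubectl --context addons-342654 top pods -n kube-system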

                                                
                                    
TestAddons/parallel/CSI (50.87s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 12.921245ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-342654 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342654 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342654 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342654 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342654 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342654 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342654 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342654 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342654 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342654 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342654 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-342654 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [23f43773-a8ff-419f-8880-23215174418b] Pending
helpers_test.go:344: "task-pv-pod" [23f43773-a8ff-419f-8880-23215174418b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [23f43773-a8ff-419f-8880-23215174418b] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.036618555s
addons_test.go:560: (dbg) Run:  kubectl --context addons-342654 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-342654 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-342654 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-342654 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-342654 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-342654 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342654 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342654 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342654 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342654 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342654 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342654 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342654 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342654 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342654 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342654 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342654 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-342654 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-342654 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0ddf9cb9-250c-4400-b3c3-7f6d5d8e55d1] Pending
helpers_test.go:344: "task-pv-pod-restore" [0ddf9cb9-250c-4400-b3c3-7f6d5d8e55d1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0ddf9cb9-250c-4400-b3c3-7f6d5d8e55d1] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.014663741s
addons_test.go:602: (dbg) Run:  kubectl --context addons-342654 delete pod task-pv-pod-restore
addons_test.go:602: (dbg) Done: kubectl --context addons-342654 delete pod task-pv-pod-restore: (1.125565862s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-342654 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-342654 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-arm64 -p addons-342654 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-arm64 -p addons-342654 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.833864277s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-arm64 -p addons-342654 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (50.87s)
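The log above doubles as a manual walkthrough of the csi-hostpath-driver flow. A minimal sketch of the same steps, assuming the csi-hostpath-driver and volumesnapshots addons are already enabled on the addons-342654 profile and reusing the testdata manifests from the minikube repository (their contents are not reproduced here):

    # provision a PVC against the hostpath CSI driver and check that it binds
    kubectl --context addons-342654 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-342654 get pvc hpvc -o jsonpath='{.status.phase}'
    # snapshot the volume, then restore it into a new claim
    kubectl --context addons-342654 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-342654 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'
    kubectl --context addons-342654 create -f testdata/csi-hostpath-driver/pvc-restore.yaml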

                                                
                                    
TestAddons/parallel/Headlamp (17.73s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-342654 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-342654 --alsologtostderr -v=1: (1.696699468s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-699c48fb74-cfqpg" [88bc98ff-6716-4242-b0be-ace28b039c09] Pending
helpers_test.go:344: "headlamp-699c48fb74-cfqpg" [88bc98ff-6716-4242-b0be-ace28b039c09] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-cfqpg" [88bc98ff-6716-4242-b0be-ace28b039c09] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.035529352s
--- PASS: TestAddons/parallel/Headlamp (17.73s)
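A sketch of the same check by hand, assuming the addons-342654 profile; the label selector is the one the test waits on:

    out/minikube-linux-arm64 addons enable headlamp -p addons-342654
    kubectl --context addons-342654 -n headlamp get pods -l app.kubernetes.io/name=headlamp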

                                                
                                    
TestAddons/parallel/CloudSpanner (5.74s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6dcc56475c-p87kg" [f8aae2f1-a5ae-4d95-a8a5-a0dd192c87a4] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.013389695s
addons_test.go:836: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-342654
--- PASS: TestAddons/parallel/CloudSpanner (5.74s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-342654 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-342654 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)
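The check verifies that the gcp-auth addon propagates its secret into namespaces created after the addon was enabled; the same two commands can be run by hand:

    kubectl --context addons-342654 create ns new-namespace
    kubectl --context addons-342654 get secret gcp-auth -n new-namespace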

                                                
                                    
TestAddons/StoppedEnableDisable (12.29s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-342654
addons_test.go:148: (dbg) Done: out/minikube-linux-arm64 stop -p addons-342654: (12.012556729s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-342654
addons_test.go:156: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-342654
addons_test.go:161: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-342654
--- PASS: TestAddons/StoppedEnableDisable (12.29s)
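Addon toggling is expected to work against a stopped profile; a minimal sketch using the commands logged above:

    out/minikube-linux-arm64 stop -p addons-342654
    out/minikube-linux-arm64 addons enable dashboard -p addons-342654
    out/minikube-linux-arm64 addons disable dashboard -p addons-342654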

                                                
                                    
TestCertOptions (42.68s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-233226 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E0906 20:40:28.132444  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-233226 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (39.927933949s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-233226 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-233226 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-233226 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-233226" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-233226
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-233226: (2.045460666s)
--- PASS: TestCertOptions (42.68s)
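To inspect the extra SANs and the non-default API server port by hand, something like the following should work; the grep pattern is an assumption about openssl's text layout, the rest mirrors the logged commands:

    out/minikube-linux-arm64 start -p cert-options-233226 --apiserver-ips=192.168.15.15 \
      --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p cert-options-233226 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A2 'Subject Alternative Name'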

                                                
                                    
TestCertExpiration (251.26s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-608991 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-608991 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (38.225578729s)
E0906 20:42:37.101740  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-608991 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E0906 20:44:52.438401  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-608991 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (30.935463217s)
helpers_test.go:175: Cleaning up "cert-expiration-608991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-608991
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-608991: (2.083267186s)
--- PASS: TestCertExpiration (251.26s)
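--cert-expiration sets the certificate lifetime at provision time, and a second start with a longer value re-issues the certs; a sketch mirroring the run above:

    out/minikube-linux-arm64 start -p cert-expiration-608991 --cert-expiration=3m --driver=docker --container-runtime=crio
    # once the 3-minute window has elapsed, renew with a one-year lifetime
    out/minikube-linux-arm64 start -p cert-expiration-608991 --cert-expiration=8760h --driver=docker --container-runtime=crio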

                                                
                                    
TestForceSystemdFlag (34.51s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-282351 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-282351 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (31.777213133s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-282351 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-282351" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-282351
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-282351: (2.426624126s)
--- PASS: TestForceSystemdFlag (34.51s)
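With --force-systemd the test reads the CRI-O drop-in to confirm the cgroup manager; a hand check, assuming the expected setting is cgroup_manager = "systemd":

    out/minikube-linux-arm64 start -p force-systemd-flag-282351 --force-systemd --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p force-systemd-flag-282351 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager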

                                                
                                    
TestForceSystemdEnv (39.86s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-240159 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0906 20:39:52.440087  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-240159 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.180524969s)
helpers_test.go:175: Cleaning up "force-systemd-env-240159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-240159
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-240159: (2.679175627s)
--- PASS: TestForceSystemdEnv (39.86s)
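The env variant drives the same behaviour through the environment rather than the flag; a sketch, assuming the test exports MINIKUBE_FORCE_SYSTEMD=true, the variable shown in the start output later in this report:

    MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start -p force-systemd-env-240159 --driver=docker --container-runtime=crio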

                                                
                                    
TestErrorSpam/setup (33.09s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-789552 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-789552 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-789552 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-789552 --driver=docker  --container-runtime=crio: (33.090508297s)
--- PASS: TestErrorSpam/setup (33.09s)

                                                
                                    
TestErrorSpam/start (0.87s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-789552 --log_dir /tmp/nospam-789552 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-789552 --log_dir /tmp/nospam-789552 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-789552 --log_dir /tmp/nospam-789552 start --dry-run
--- PASS: TestErrorSpam/start (0.87s)

                                                
                                    
TestErrorSpam/status (1.11s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-789552 --log_dir /tmp/nospam-789552 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-789552 --log_dir /tmp/nospam-789552 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-789552 --log_dir /tmp/nospam-789552 status
--- PASS: TestErrorSpam/status (1.11s)

                                                
                                    
TestErrorSpam/pause (1.88s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-789552 --log_dir /tmp/nospam-789552 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-789552 --log_dir /tmp/nospam-789552 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-789552 --log_dir /tmp/nospam-789552 pause
--- PASS: TestErrorSpam/pause (1.88s)

                                                
                                    
TestErrorSpam/unpause (2.06s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-789552 --log_dir /tmp/nospam-789552 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-789552 --log_dir /tmp/nospam-789552 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-789552 --log_dir /tmp/nospam-789552 unpause
--- PASS: TestErrorSpam/unpause (2.06s)

                                                
                                    
TestErrorSpam/stop (1.46s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-789552 --log_dir /tmp/nospam-789552 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-789552 --log_dir /tmp/nospam-789552 stop: (1.24500778s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-789552 --log_dir /tmp/nospam-789552 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-789552 --log_dir /tmp/nospam-789552 stop
--- PASS: TestErrorSpam/stop (1.46s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17116-652515/.minikube/files/etc/test/nested/copy/657900/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (76.23s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-687153 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0906 20:04:52.438284  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
E0906 20:04:52.445227  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
E0906 20:04:52.455480  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
E0906 20:04:52.476136  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
E0906 20:04:52.517040  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
E0906 20:04:52.597333  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
E0906 20:04:52.757659  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
E0906 20:04:53.078287  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
E0906 20:04:53.719191  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
E0906 20:04:55.007844  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
E0906 20:04:57.568090  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
E0906 20:05:02.688290  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
E0906 20:05:12.928534  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
E0906 20:05:33.409567  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-687153 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m16.230178748s)
--- PASS: TestFunctional/serial/StartWithProxy (76.23s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (41.41s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-687153 --alsologtostderr -v=8
E0906 20:06:14.369847  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-687153 --alsologtostderr -v=8: (41.411873839s)
functional_test.go:659: soft start took 41.412459454s for "functional-687153" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.41s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-687153 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-687153 cache add registry.k8s.io/pause:3.1: (1.30975581s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-687153 cache add registry.k8s.io/pause:3.3: (1.394253375s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-687153 cache add registry.k8s.io/pause:latest: (1.533601988s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-687153 /tmp/TestFunctionalserialCacheCmdcacheadd_local3773918299/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 cache add minikube-local-cache-test:functional-687153
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 cache delete minikube-local-cache-test:functional-687153
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-687153
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-687153 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (321.555071ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-687153 cache reload: (1.178971287s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.20s)
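cache reload re-pushes cached images into the node after they have been removed with crictl; a sketch of the same round trip:

    out/minikube-linux-arm64 -p functional-687153 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-687153 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
    out/minikube-linux-arm64 -p functional-687153 cache reload
    out/minikube-linux-arm64 -p functional-687153 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again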

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 kubectl -- --context functional-687153 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-687153 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (34.38s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-687153 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-687153 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.377864476s)
functional_test.go:757: restart took 34.377984723s for "functional-687153" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.38s)
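--extra-config passes per-component flags through on a restart of an existing profile; the invocation from the log, runnable by hand:

    out/minikube-linux-arm64 start -p functional-687153 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all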

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-687153 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.88s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-687153 logs: (1.875448491s)
--- PASS: TestFunctional/serial/LogsCmd (1.88s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.91s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 logs --file /tmp/TestFunctionalserialLogsFileCmd3161915901/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-687153 logs --file /tmp/TestFunctionalserialLogsFileCmd3161915901/001/logs.txt: (1.905937088s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.91s)

                                                
                                    
TestFunctional/serial/InvalidService (4.45s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-687153 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-687153
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-687153: exit status 115 (651.989497ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32660 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-687153 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.45s)
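minikube service is expected to exit with SVC_UNREACHABLE (status 115 here) when the selected service has no running pods; a sketch reusing the test's invalidsvc.yaml manifest (contents not shown in this report):

    kubectl --context functional-687153 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-arm64 service invalid-svc -p functional-687153; echo "exit: $?"
    kubectl --context functional-687153 delete -f testdata/invalidsvc.yaml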

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-687153 config get cpus: exit status 14 (100.099192ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-687153 config get cpus: exit status 14 (76.289871ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
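config get on an unset key exits 14, which is what the test asserts; a sketch of the same set/get/unset cycle:

    out/minikube-linux-arm64 -p functional-687153 config set cpus 2
    out/minikube-linux-arm64 -p functional-687153 config get cpus     # prints 2
    out/minikube-linux-arm64 -p functional-687153 config unset cpus
    out/minikube-linux-arm64 -p functional-687153 config get cpus     # exits 14: key not found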

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-687153 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-687153 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 684442: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.39s)

                                                
                                    
TestFunctional/parallel/DryRun (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-687153 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-687153 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (419.484197ms)

                                                
                                                
-- stdout --
	* [functional-687153] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 20:08:24.603298  683636 out.go:296] Setting OutFile to fd 1 ...
	I0906 20:08:24.603484  683636 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:08:24.603492  683636 out.go:309] Setting ErrFile to fd 2...
	I0906 20:08:24.603498  683636 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:08:24.603797  683636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17116-652515/.minikube/bin
	I0906 20:08:24.604156  683636 out.go:303] Setting JSON to false
	I0906 20:08:24.605097  683636 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10059,"bootTime":1694020846,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0906 20:08:24.605172  683636 start.go:138] virtualization:  
	I0906 20:08:24.607999  683636 out.go:177] * [functional-687153] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0906 20:08:24.610071  683636 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 20:08:24.611989  683636 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 20:08:24.610239  683636 notify.go:220] Checking for updates...
	I0906 20:08:24.615652  683636 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig
	I0906 20:08:24.617749  683636 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube
	I0906 20:08:24.619548  683636 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0906 20:08:24.621792  683636 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 20:08:24.624061  683636 config.go:182] Loaded profile config "functional-687153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 20:08:24.624551  683636 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 20:08:24.740028  683636 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0906 20:08:24.740156  683636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 20:08:24.946512  683636 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-09-06 20:08:24.935882197 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0906 20:08:24.946621  683636 docker.go:294] overlay module found
	I0906 20:08:24.948906  683636 out.go:177] * Using the docker driver based on existing profile
	I0906 20:08:24.950705  683636 start.go:298] selected driver: docker
	I0906 20:08:24.950784  683636 start.go:902] validating driver "docker" against &{Name:functional-687153 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-687153 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 20:08:24.950907  683636 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 20:08:24.953228  683636 out.go:177] 
	W0906 20:08:24.955073  683636 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0906 20:08:24.957125  683636 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-687153 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (1.19s)
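--dry-run validates the requested configuration without touching the existing profile; asking for less memory than the 1800MB minimum exits 23, as above. A sketch:

    out/minikube-linux-arm64 start -p functional-687153 --dry-run --memory 250MB --driver=docker --container-runtime=crio; echo "exit: $?"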

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-687153 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-687153 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (315.098267ms)

                                                
                                                
-- stdout --
	* [functional-687153] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 20:08:25.844562  683985 out.go:296] Setting OutFile to fd 1 ...
	I0906 20:08:25.845651  683985 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:08:25.845665  683985 out.go:309] Setting ErrFile to fd 2...
	I0906 20:08:25.845671  683985 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:08:25.846056  683985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17116-652515/.minikube/bin
	I0906 20:08:25.847314  683985 out.go:303] Setting JSON to false
	I0906 20:08:25.848931  683985 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10060,"bootTime":1694020846,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0906 20:08:25.849012  683985 start.go:138] virtualization:  
	I0906 20:08:25.852281  683985 out.go:177] * [functional-687153] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	I0906 20:08:25.854303  683985 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 20:08:25.856390  683985 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 20:08:25.854411  683985 notify.go:220] Checking for updates...
	I0906 20:08:25.861751  683985 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig
	I0906 20:08:25.864851  683985 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube
	I0906 20:08:25.869961  683985 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0906 20:08:25.872053  683985 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 20:08:25.874375  683985 config.go:182] Loaded profile config "functional-687153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 20:08:25.875067  683985 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 20:08:25.944501  683985 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0906 20:08:25.944737  683985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 20:08:26.039900  683985 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-09-06 20:08:26.029008234 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0906 20:08:26.040012  683985 docker.go:294] overlay module found
	I0906 20:08:26.041945  683985 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0906 20:08:26.043678  683985 start.go:298] selected driver: docker
	I0906 20:08:26.043698  683985 start.go:902] validating driver "docker" against &{Name:functional-687153 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-687153 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 20:08:26.043836  683985 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 20:08:26.046490  683985 out.go:177] 
	W0906 20:08:26.049059  683985 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0906 20:08:26.050968  683985 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.15s)
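The three status invocations above (plain, Go-template, JSON) can be reproduced outside the test harness. Below is a minimal Go sketch of the same calls, assuming the binary path out/minikube-linux-arm64 and the profile functional-687153 shown in this log; it is not the test's implementation, just the same commands shelled out from a standalone program.

// status_sketch.go: minimal sketch mirroring the StatusCmd steps above.
// Assumes the minikube binary path and profile name taken from this log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	bin := "out/minikube-linux-arm64" // binary under test (path from this log)
	profile := "functional-687153"    // profile name (from this log)

	// The same three invocations the block above makes: plain, templated, JSON.
	argSets := [][]string{
		{"-p", profile, "status"},
		{"-p", profile, "status", "-f", "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"},
		{"-p", profile, "status", "-o", "json"},
	}
	for _, args := range argSets {
		out, err := exec.Command(bin, args...).CombinedOutput()
		// minikube status can encode cluster state in a non-zero exit code, so a
		// non-nil err does not necessarily mean the command produced no output.
		fmt.Printf("$ %s %v\n%s(err=%v)\n", bin, args, out, err)
	}
}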

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-687153 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-687153 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-62rv2" [0f44c649-685b-40df-9244-01d32f6df0fa] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-62rv2" [0f44c649-685b-40df-9244-01d32f6df0fa] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.018073624s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31679
functional_test.go:1674: http://192.168.49.2:31679: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-7799dfb7c6-62rv2

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31679
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.66s)
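The block above creates a deployment, exposes it as a NodePort service, asks minikube for the URL, and fetches the echo response shown. The following Go sketch reproduces that flow under the same context and binary path from this log; it omits the readiness wait the real test performs, so in practice you would poll for the pod to be Running before the HTTP request.

// service_connect_sketch.go: sketch of the deploy/expose/probe flow shown above.
// Assumptions: kubectl context "functional-687153" exists and the minikube binary
// path below matches this CI job; both values are taken from the log.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		fmt.Printf("%s %v failed: %v\n%s", name, args, err, out)
	}
	return string(out)
}

func main() {
	ctx := "functional-687153"
	bin := "out/minikube-linux-arm64"

	run("kubectl", "--context", ctx, "create", "deployment", "hello-node-connect",
		"--image=registry.k8s.io/echoserver-arm:1.8")
	run("kubectl", "--context", ctx, "expose", "deployment", "hello-node-connect",
		"--type=NodePort", "--port=8080")

	// Ask minikube for the NodePort URL, then fetch it once the pod is Running.
	url := strings.TrimSpace(run(bin, "-p", ctx, "service", "hello-node-connect", "--url"))
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("GET failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // echoserver reports hostname, request info, headers
}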

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)
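The addons listing above is also available as JSON. A small sketch of consuming it is below; the log does not reproduce the JSON payload, so the decode assumes the output is an object keyed by addon name and keeps the per-addon details opaque. Adjust the types if the schema differs in your minikube version.

// addons_list_sketch.go: sketch of `addons list -o json`.
// Assumption: the output is a JSON object keyed by addon name (payload not shown in this log).
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-687153",
		"addons", "list", "-o", "json").Output()
	if err != nil {
		fmt.Println("addons list failed:", err)
		return
	}
	var addons map[string]json.RawMessage // addon name -> per-addon details (left undecoded)
	if err := json.Unmarshal(out, &addons); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for name := range addons {
		fmt.Println(name)
	}
}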

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (27.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7283c2b1-895e-4a02-a25b-d08349c90f97] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.012446524s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-687153 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-687153 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-687153 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-687153 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-687153 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [981599be-a97b-49a8-9574-aa7819bdc30a] Pending
helpers_test.go:344: "sp-pod" [981599be-a97b-49a8-9574-aa7819bdc30a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [981599be-a97b-49a8-9574-aa7819bdc30a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.033077059s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-687153 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-687153 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-687153 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c1ae745d-54a3-4052-a737-bd17872f5c90] Pending
helpers_test.go:344: "sp-pod" [c1ae745d-54a3-4052-a737-bd17872f5c90] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c1ae745d-54a3-4052-a737-bd17872f5c90] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.031590672s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-687153 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.38s)
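The sequence above is a persistence check: apply a PVC, run a pod that mounts it, write /tmp/mount/foo, delete the pod, recreate it, and confirm the file survived. A minimal Go sketch of the same kubectl sequence follows; it uses the testdata manifests referenced in the log and, unlike the test, does not wait for the pod to become Running between steps.

// pvc_persistence_sketch.go: sketch of the PVC persistence check shown above.
// Assumes the kubectl context and the testdata manifests referenced in this log.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) {
	full := append([]string{"--context", "functional-687153"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("kubectl %v\n%s(err=%v)\n", args, out, err)
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (the real test waits for sp-pod to be Running before each exec)
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// The file written by the first pod should still be visible in the new pod.
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
}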

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.87s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh -n functional-687153 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 cp functional-687153:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd870895526/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh -n functional-687153 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.54s)
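The cp block above is a round trip: copy a local file into the node, verify it over ssh, then copy it back out. A minimal sketch of the same round trip is below; the output path /tmp/cp-test-out.txt is a placeholder (the test writes into a per-run temp directory).

// cp_roundtrip_sketch.go: sketch of the cp round trip shown above.
// Binary path and profile are from this log; the local output path is a placeholder.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	bin, profile := "out/minikube-linux-arm64", "functional-687153"
	steps := [][]string{
		{"-p", profile, "cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt"},
		{"-p", profile, "ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt"},
		{"-p", profile, "cp", profile + ":/home/docker/cp-test.txt", "/tmp/cp-test-out.txt"},
	}
	for _, args := range steps {
		out, err := exec.Command(bin, args...).CombinedOutput()
		fmt.Printf("$ minikube %v\n%s(err=%v)\n", args, out, err)
	}
}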

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/657900/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh "sudo cat /etc/test/nested/copy/657900/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/657900.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh "sudo cat /etc/ssl/certs/657900.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/657900.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh "sudo cat /usr/share/ca-certificates/657900.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/6579002.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh "sudo cat /etc/ssl/certs/6579002.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/6579002.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh "sudo cat /usr/share/ca-certificates/6579002.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
E0906 20:07:36.290511  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/CertSync (2.09s)
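The hashed filenames checked above (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention for /etc/ssl/certs links to the synced certificates. The sketch below relates a local PEM to its hashed name and then performs the same in-node cat; it assumes a local openssl binary, and the PEM path is a placeholder rather than a file from this run.

// certsync_hash_sketch.go: relate a synced PEM to its /etc/ssl/certs/<hash>.0 name.
// Assumes a local openssl binary; the PEM path below is a placeholder.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pem := "/tmp/657900.pem" // placeholder: a cert that minikube synced into the node

	// `openssl x509 -hash` prints the subject hash used for the .0 link name.
	hashOut, err := exec.Command("openssl", "x509", "-noout", "-hash", "-in", pem).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	name := "/etc/ssl/certs/" + strings.TrimSpace(string(hashOut)) + ".0"

	// Same check the test performs: cat the hashed filename inside the node.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-687153",
		"ssh", "sudo cat "+name).CombinedOutput()
	fmt.Printf("%s\n%s(err=%v)\n", name, out, err)
}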

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-687153 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
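The label check above uses a Go template that ranges over the first node's metadata.labels and prints each key. The same invocation can be run standalone, as in this sketch, which uses the template exactly as it appears in the log (without the extra single quotes the test wraps around it).

// node_labels_sketch.go: list node label keys with the go-template used above.
// Assumes the kubectl context from this log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tmpl := `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
	out, err := exec.Command("kubectl", "--context", "functional-687153",
		"get", "nodes", "--output=go-template", "--template="+tmpl).CombinedOutput()
	fmt.Printf("%s\n(err=%v)\n", out, err)
}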

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-687153 ssh "sudo systemctl is-active docker": exit status 1 (396.355894ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-687153 ssh "sudo systemctl is-active containerd": exit status 1 (384.74736ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)
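Because this cluster runs crio, docker and containerd are expected to be inactive in the node: `systemctl is-active` exits 0 only for an active unit, and the non-zero exit (status 3 in the output above) with "inactive" on stdout is exactly what the test treats as a pass. A minimal sketch of the same check:

// runtime_inactive_sketch.go: sketch of the systemctl is-active checks above.
// For a crio-based node, "inactive" plus a non-zero exit is the expected result.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	bin, profile := "out/minikube-linux-arm64", "functional-687153"
	for _, unit := range []string{"docker", "containerd"} {
		cmd := exec.Command(bin, "-p", profile, "ssh", "sudo systemctl is-active "+unit)
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s: %s(err=%v)\n", unit, out, err) // err wraps the ssh exit status
	}
}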

                                                
                                    
x
+
TestFunctional/parallel/License (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.93s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-687153 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-687153
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-687153 image ls --format short --alsologtostderr:
I0906 20:08:28.471861  684451 out.go:296] Setting OutFile to fd 1 ...
I0906 20:08:28.472152  684451 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 20:08:28.472180  684451 out.go:309] Setting ErrFile to fd 2...
I0906 20:08:28.472198  684451 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 20:08:28.472482  684451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17116-652515/.minikube/bin
I0906 20:08:28.473200  684451 config.go:182] Loaded profile config "functional-687153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0906 20:08:28.473392  684451 config.go:182] Loaded profile config "functional-687153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0906 20:08:28.473902  684451 cli_runner.go:164] Run: docker container inspect functional-687153 --format={{.State.Status}}
I0906 20:08:28.511052  684451 ssh_runner.go:195] Run: systemctl --version
I0906 20:08:28.511186  684451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-687153
I0906 20:08:28.535914  684451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33427 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/functional-687153/id_rsa Username:docker}
I0906 20:08:28.635823  684451 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-687153 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy              | v1.28.1            | 812f5241df7fd | 69.9MB |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| gcr.io/google-containers/addon-resizer  | functional-687153  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| localhost/my-image                      | functional-687153  | 35a174384aeaa | 1.64MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-scheduler          | v1.28.1            | b4a5a57e99492 | 59.2MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/kube-apiserver          | v1.28.1            | b29fb62480892 | 121MB  |
| registry.k8s.io/kube-controller-manager | v1.28.1            | 8b6e1980b7584 | 117MB  |
| docker.io/library/nginx                 | alpine             | fa0c6bb795403 | 45.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b18bf71b941ba | 60.9MB |
| docker.io/library/nginx                 | latest             | ab73c7fd67234 | 196MB  |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-687153 image ls --format table --alsologtostderr:
I0906 20:08:34.505446  684865 out.go:296] Setting OutFile to fd 1 ...
I0906 20:08:34.505696  684865 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 20:08:34.505707  684865 out.go:309] Setting ErrFile to fd 2...
I0906 20:08:34.505713  684865 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 20:08:34.506014  684865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17116-652515/.minikube/bin
I0906 20:08:34.506724  684865 config.go:182] Loaded profile config "functional-687153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0906 20:08:34.506860  684865 config.go:182] Loaded profile config "functional-687153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0906 20:08:34.507381  684865 cli_runner.go:164] Run: docker container inspect functional-687153 --format={{.State.Status}}
I0906 20:08:34.538165  684865 ssh_runner.go:195] Run: systemctl --version
I0906 20:08:34.538219  684865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-687153
I0906 20:08:34.560564  684865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33427 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/functional-687153/id_rsa Username:docker}
I0906 20:08:34.665715  684865 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 image ls --format json --alsologtostderr
2023/09/06 20:08:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-687153 image ls --format json --alsologtostderr:
[{"id":"a80afb4ef4e5cd9ab92da4ea74af2c60370351a34221c439de9b80092252c146","repoDigests":["docker.io/library/1494fbf95172f82b8caa6cb19386d8921e84d1f2c1f3ff7041a27ae402e5525f-tmp@sha256:9335e81260fedf2eb59785b44d533d43b125a7469a6d1a15738c76d85dc61366"],"repoTags":[],"size":"1637644"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-687153"],"size":"34114467"},{"id":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:d4ad404d1c05c2f18b76f5d6936b838be07fed14b3ffefd09a6b2f0c20e3ef5c","registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.1"],"size":"120857550"},{"id":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be7
1965","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4a0dd5abeba8e3ca67884fe9db43e8dbb299ad3199f0c6281e8a70f03ce4248f","registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.1"],"size":"117187378"},{"id":"fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1","repoDigests":["docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70","docker.io/library/nginx@sha256:700873f42f88d156b7f78f32f0a1dc782286eedc0f175d62d90870820dd98790"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45265718"},{"id":"ab73c7fd672341e41ec600081253d0b99ea31d0c1acdfb46a1485004472da7ac","repoDigests":["docker.io/library/nginx@sha256:104c7c5c54f2685f0f46f3be607ce60da7085da3eaa5ad22d3d9f01594295e9c","docker.io/library/nginx@sha256:d204087971390839f077afcaa4f5a771c1694610f0f7cb13a2d2a3aa520b053f"],"repoTags":["docker.io/library/nginx:latest"],"size":"19
6196622"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"35a174384aeaac9cb6cd69872185242e8999f83de7f717c7b6e86e4427cf359b","repoDigests":["localhost/my-image@sha256:542238c1af485837997ab380bf3e34f6d28153a7d42de11c10fb1ab253fcf46a"],"repoTags":["localhost/my-image:functional-687153"],"size":"1640226"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"5139
3451"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},{"id":"812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26","repoDigests":["registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c","registry.k8s.io/kube-proxy@sha256:a9d9eaff8bae5cb45cc640255fd1490c85c3517d92f2c78bcd71dde9a12d5220"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.1"],"size":"69926807"},{"id":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0bb4ad9c0c3d2258bc97616ddb51291e5d20d6ba7d4406767f4355f56fab842d","registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4"],"repoTags":["re
gistry.k8s.io/kube-scheduler:v1.28.1"],"size":"59188020"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","repoDigests":["docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f","docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"60881430"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kuberne
tesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f7720
6a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-687153 image ls --format json --alsologtostderr:
I0906 20:08:34.159466  684836 out.go:296] Setting OutFile to fd 1 ...
I0906 20:08:34.159690  684836 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 20:08:34.159715  684836 out.go:309] Setting ErrFile to fd 2...
I0906 20:08:34.159735  684836 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 20:08:34.160002  684836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17116-652515/.minikube/bin
I0906 20:08:34.160630  684836 config.go:182] Loaded profile config "functional-687153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0906 20:08:34.160796  684836 config.go:182] Loaded profile config "functional-687153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0906 20:08:34.161269  684836 cli_runner.go:164] Run: docker container inspect functional-687153 --format={{.State.Status}}
I0906 20:08:34.180453  684836 ssh_runner.go:195] Run: systemctl --version
I0906 20:08:34.180501  684836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-687153
I0906 20:08:34.210862  684836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33427 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/functional-687153/id_rsa Username:docker}
I0906 20:08:34.343802  684836 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)
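The JSON emitted above is an array of objects with id, repoDigests, repoTags, and size fields (size is a decimal string of bytes). The sketch below runs the same command and decodes it into a struct whose field names are taken from that output; the binary path and profile are the ones from this log.

// image_list_json_sketch.go: decode the `image ls --format json` output shown above.
// The struct fields mirror the keys visible in the JSON in this log.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a decimal string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-687153",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range images {
		fmt.Printf("%v  %s bytes\n", img.RepoTags, img.Size)
	}
}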

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-687153 image ls --format yaml --alsologtostderr:
- id: b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79
repoDigests:
- docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "60881430"
- id: fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1
repoDigests:
- docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70
- docker.io/library/nginx@sha256:700873f42f88d156b7f78f32f0a1dc782286eedc0f175d62d90870820dd98790
repoTags:
- docker.io/library/nginx:alpine
size: "45265718"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26
repoDigests:
- registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c
- registry.k8s.io/kube-proxy@sha256:a9d9eaff8bae5cb45cc640255fd1490c85c3517d92f2c78bcd71dde9a12d5220
repoTags:
- registry.k8s.io/kube-proxy:v1.28.1
size: "69926807"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-687153
size: "34114467"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:d4ad404d1c05c2f18b76f5d6936b838be07fed14b3ffefd09a6b2f0c20e3ef5c
- registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.1
size: "120857550"
- id: 8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4a0dd5abeba8e3ca67884fe9db43e8dbb299ad3199f0c6281e8a70f03ce4248f
- registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.1
size: "117187378"
- id: ab73c7fd672341e41ec600081253d0b99ea31d0c1acdfb46a1485004472da7ac
repoDigests:
- docker.io/library/nginx@sha256:104c7c5c54f2685f0f46f3be607ce60da7085da3eaa5ad22d3d9f01594295e9c
- docker.io/library/nginx@sha256:d204087971390839f077afcaa4f5a771c1694610f0f7cb13a2d2a3aa520b053f
repoTags:
- docker.io/library/nginx:latest
size: "196196622"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0bb4ad9c0c3d2258bc97616ddb51291e5d20d6ba7d4406767f4355f56fab842d
- registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.1
size: "59188020"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-687153 image ls --format yaml --alsologtostderr:
I0906 20:08:28.750600  684507 out.go:296] Setting OutFile to fd 1 ...
I0906 20:08:28.750744  684507 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 20:08:28.750750  684507 out.go:309] Setting ErrFile to fd 2...
I0906 20:08:28.750755  684507 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 20:08:28.751076  684507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17116-652515/.minikube/bin
I0906 20:08:28.751678  684507 config.go:182] Loaded profile config "functional-687153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0906 20:08:28.751828  684507 config.go:182] Loaded profile config "functional-687153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0906 20:08:28.752282  684507 cli_runner.go:164] Run: docker container inspect functional-687153 --format={{.State.Status}}
I0906 20:08:28.774520  684507 ssh_runner.go:195] Run: systemctl --version
I0906 20:08:28.774574  684507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-687153
I0906 20:08:28.793752  684507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33427 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/functional-687153/id_rsa Username:docker}
I0906 20:08:28.892763  684507 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (5.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-687153 ssh pgrep buildkitd: exit status 1 (354.116723ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 image build -t localhost/my-image:functional-687153 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-687153 image build -t localhost/my-image:functional-687153 testdata/build --alsologtostderr: (4.477396198s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-687153 image build -t localhost/my-image:functional-687153 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> a80afb4ef4e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-687153
--> 35a174384ae
Successfully tagged localhost/my-image:functional-687153
35a174384aeaac9cb6cd69872185242e8999f83de7f717c7b6e86e4427cf359b
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-687153 image build -t localhost/my-image:functional-687153 testdata/build --alsologtostderr:
I0906 20:08:29.386213  684584 out.go:296] Setting OutFile to fd 1 ...
I0906 20:08:29.386825  684584 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 20:08:29.386835  684584 out.go:309] Setting ErrFile to fd 2...
I0906 20:08:29.386841  684584 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 20:08:29.387171  684584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17116-652515/.minikube/bin
I0906 20:08:29.387915  684584 config.go:182] Loaded profile config "functional-687153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0906 20:08:29.388848  684584 config.go:182] Loaded profile config "functional-687153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0906 20:08:29.389388  684584 cli_runner.go:164] Run: docker container inspect functional-687153 --format={{.State.Status}}
I0906 20:08:29.430398  684584 ssh_runner.go:195] Run: systemctl --version
I0906 20:08:29.430452  684584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-687153
I0906 20:08:29.455167  684584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33427 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/functional-687153/id_rsa Username:docker}
I0906 20:08:29.569361  684584 build_images.go:151] Building image from path: /tmp/build.2510052743.tar
I0906 20:08:29.569436  684584 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0906 20:08:29.581285  684584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2510052743.tar
I0906 20:08:29.586960  684584 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2510052743.tar: stat -c "%s %y" /var/lib/minikube/build/build.2510052743.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2510052743.tar': No such file or directory
I0906 20:08:29.586990  684584 ssh_runner.go:362] scp /tmp/build.2510052743.tar --> /var/lib/minikube/build/build.2510052743.tar (3072 bytes)
I0906 20:08:29.622123  684584 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2510052743
I0906 20:08:29.633414  684584 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2510052743 -xf /var/lib/minikube/build/build.2510052743.tar
I0906 20:08:29.645400  684584 crio.go:297] Building image: /var/lib/minikube/build/build.2510052743
I0906 20:08:29.645520  684584 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-687153 /var/lib/minikube/build/build.2510052743 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0906 20:08:33.753671  684584 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-687153 /var/lib/minikube/build/build.2510052743 --cgroup-manager=cgroupfs: (4.108111374s)
I0906 20:08:33.753734  684584 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2510052743
I0906 20:08:33.773858  684584 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2510052743.tar
I0906 20:08:33.789202  684584 build_images.go:207] Built localhost/my-image:functional-687153 from /tmp/build.2510052743.tar
I0906 20:08:33.789233  684584 build_images.go:123] succeeded building to: functional-687153
I0906 20:08:33.789238  684584 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.11s)
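The STEP 1/3 .. 3/3 lines above show that the testdata/build context is a three-step Containerfile (FROM busybox, RUN true, ADD content.txt), built inside the node via podman since the runtime is crio. The sketch below writes an equivalent context to a temp directory and runs the same `image build` command; the Dockerfile and content.txt bodies are reconstructed from the logged steps, not copied from the repo's testdata.

// image_build_sketch.go: an `image build` matching the STEP 1/3..3/3 output above.
// The Dockerfile and content.txt contents are assumptions modeled on the logged steps.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644)
	os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644)

	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-687153",
		"image", "build", "-t", "localhost/my-image:functional-687153", dir).CombinedOutput()
	fmt.Printf("%s(err=%v)\n", out, err)
}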

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.629322752s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-687153
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.66s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 image load --daemon gcr.io/google-containers/addon-resizer:functional-687153 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-687153 image load --daemon gcr.io/google-containers/addon-resizer:functional-687153 --alsologtostderr: (5.608052318s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.89s)
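Together with the Setup block above, this is a pull/tag/load/verify flow: the image is pulled and retagged in the host's Docker daemon, then `image load --daemon` copies it into the cluster's crio image store, where `image ls` should show it. A minimal sketch of that sequence, assuming a local docker CLI plus the binary and profile from this log:

// image_load_sketch.go: sketch of the pull/tag/load/verify flow from the Setup
// and ImageLoadDaemon blocks above. Assumes a local docker CLI.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s(err=%v)\n", name, args, out, err)
}

func main() {
	bin, profile := "out/minikube-linux-arm64", "functional-687153"
	tag := "gcr.io/google-containers/addon-resizer:" + profile

	run("docker", "pull", "gcr.io/google-containers/addon-resizer:1.8.8")
	run("docker", "tag", "gcr.io/google-containers/addon-resizer:1.8.8", tag)
	run(bin, "-p", profile, "image", "load", "--daemon", tag) // host daemon -> cluster crio store
	run(bin, "-p", profile, "image", "ls")                    // the tag should now appear here
}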

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-687153 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-687153 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-9r8jr" [9892923e-839e-49fc-8a11-429b4689ab9a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-9r8jr" [9892923e-839e-49fc-8a11-429b4689ab9a] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.044931253s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.42s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 image load --daemon gcr.io/google-containers/addon-resizer:functional-687153 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-687153 image load --daemon gcr.io/google-containers/addon-resizer:functional-687153 --alsologtostderr: (2.960998842s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.20s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.33s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.733117718s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-687153
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 image load --daemon gcr.io/google-containers/addon-resizer:functional-687153 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-687153 image load --daemon gcr.io/google-containers/addon-resizer:functional-687153 --alsologtostderr: (4.250667431s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.33s)
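
The three daemon-load tests above all follow the same pattern: stage an image in the local Docker daemon, push it into the cluster's image store with `image load --daemon`, then confirm it with `image ls`. A minimal sketch of that flow, reusing the profile name and binary path from this run (any other tag would work in place of functional-687153):

  # stage the image in the local Docker daemon
  docker pull gcr.io/google-containers/addon-resizer:1.8.9
  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-687153
  # copy it from the daemon into the cluster, then verify it is listed
  out/minikube-linux-arm64 -p functional-687153 image load --daemon gcr.io/google-containers/addon-resizer:functional-687153 --alsologtostderr
  out/minikube-linux-arm64 -p functional-687153 image ls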

TestFunctional/parallel/ServiceCmd/List (0.44s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.44s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 service list -o json
functional_test.go:1493: Took "509.217102ms" to run "out/minikube-linux-arm64 -p functional-687153 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.5s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:32447
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.50s)

TestFunctional/parallel/ServiceCmd/Format (0.51s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)

TestFunctional/parallel/ServiceCmd/URL (0.51s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:32447
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.51s)
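
Taken together, the ServiceCmd subtests amount to one workflow: deploy a pod, expose it as a NodePort service, then ask minikube for its endpoint in various formats. A compressed sketch using the image and names from this run:

  kubectl --context functional-687153 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
  kubectl --context functional-687153 expose deployment hello-node --type=NodePort --port=8080
  # list services, then resolve the endpoint as plain text, JSON, or an https:// URL
  out/minikube-linux-arm64 -p functional-687153 service list
  out/minikube-linux-arm64 -p functional-687153 service list -o json
  out/minikube-linux-arm64 -p functional-687153 service hello-node --url
  out/minikube-linux-arm64 -p functional-687153 service --namespace=default --https --url hello-node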

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-687153 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-687153 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-687153 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 681335: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-687153 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 image save gcr.io/google-containers/addon-resizer:functional-687153 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-arm64 -p functional-687153 image save gcr.io/google-containers/addon-resizer:functional-687153 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.070507665s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.07s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-687153 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.51s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-687153 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [61b8a171-0cab-4165-9278-06713c73c064] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [61b8a171-0cab-4165-9278-06713c73c064] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.035639226s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.51s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 image rm gcr.io/google-containers/addon-resizer:functional-687153 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.42s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-687153 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (2.149196889s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.42s)
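
ImageSaveToFile, ImageRemove and ImageLoadFromFile together exercise the tarball round trip. A condensed sketch under the same assumptions (profile name and tar path are taken verbatim from this run):

  # export the image from the cluster to a tarball on the host
  out/minikube-linux-arm64 -p functional-687153 image save gcr.io/google-containers/addon-resizer:functional-687153 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
  # remove it from the cluster, then restore it from the tarball and verify
  out/minikube-linux-arm64 -p functional-687153 image rm gcr.io/google-containers/addon-resizer:functional-687153 --alsologtostderr
  out/minikube-linux-arm64 -p functional-687153 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
  out/minikube-linux-arm64 -p functional-687153 image ls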

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-687153
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 image save --daemon gcr.io/google-containers/addon-resizer:functional-687153 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-687153
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-687153 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.27.251 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-687153 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
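
The TunnelCmd serial tests walk one tunnel lifecycle: start `minikube tunnel` in the background, create a LoadBalancer service, wait for it to receive an ingress IP, reach that IP from the host, and tear the tunnel down. A sketch under the same assumptions (testsvc.yaml and the service name come from this run; the curl call is just one way to reproduce the HTTP check the test performs):

  # keep the tunnel running in the background
  out/minikube-linux-arm64 -p functional-687153 tunnel --alsologtostderr &
  TUNNEL_PID=$!
  kubectl --context functional-687153 apply -f testdata/testsvc.yaml
  # once the service is assigned an ingress IP, it is reachable directly from the host
  IP=$(kubectl --context functional-687153 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  curl -s "http://$IP"
  kill $TUNNEL_PID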

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "343.266187ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "57.774975ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "353.068134ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "57.852654ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
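
For reference, the profile-listing variants timed above differ only in output format; the -l / --light variants skip the per-profile status checks, which is consistent with their much shorter runtimes here. The exact invocations:

  out/minikube-linux-arm64 profile list
  out/minikube-linux-arm64 profile list -l
  out/minikube-linux-arm64 profile list -o json
  out/minikube-linux-arm64 profile list -o json --light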

TestFunctional/parallel/MountCmd/any-port (8.59s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-687153 /tmp/TestFunctionalparallelMountCmdany-port3621555786/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1694030894149674199" to /tmp/TestFunctionalparallelMountCmdany-port3621555786/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1694030894149674199" to /tmp/TestFunctionalparallelMountCmdany-port3621555786/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1694030894149674199" to /tmp/TestFunctionalparallelMountCmdany-port3621555786/001/test-1694030894149674199
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-687153 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (364.282379ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  6 20:08 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  6 20:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  6 20:08 test-1694030894149674199
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh cat /mount-9p/test-1694030894149674199
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-687153 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [428ab7c7-c9c7-42be-8417-9ab1f2ed8da2] Pending
helpers_test.go:344: "busybox-mount" [428ab7c7-c9c7-42be-8417-9ab1f2ed8da2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [428ab7c7-c9c7-42be-8417-9ab1f2ed8da2] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [428ab7c7-c9c7-42be-8417-9ab1f2ed8da2] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.020026285s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-687153 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-687153 /tmp/TestFunctionalparallelMountCmdany-port3621555786/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.59s)
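
MountCmd/any-port shows the 9p host-mount flow end to end: start `minikube mount` as a background process, confirm the 9p filesystem inside the node, read the shared files, and clean up. A trimmed-down sketch with a placeholder host directory (the /tmp path in the log is test-generated):

  # expose a host directory inside the node at /mount-9p
  out/minikube-linux-arm64 mount -p functional-687153 /some/host/dir:/mount-9p --alsologtostderr -v=1 &
  MOUNT_PID=$!
  # verify the 9p mount and inspect it from inside the node
  out/minikube-linux-arm64 -p functional-687153 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-687153 ssh -- ls -la /mount-9p
  # tear down
  out/minikube-linux-arm64 -p functional-687153 ssh "sudo umount -f /mount-9p"
  kill $MOUNT_PID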

TestFunctional/parallel/MountCmd/specific-port (1.88s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-687153 /tmp/TestFunctionalparallelMountCmdspecific-port2558775551/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-687153 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (427.269475ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-687153 /tmp/TestFunctionalparallelMountCmdspecific-port2558775551/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-687153 ssh "sudo umount -f /mount-9p": exit status 1 (321.319599ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-687153 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-687153 /tmp/TestFunctionalparallelMountCmdspecific-port2558775551/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.88s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.96s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-687153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2728571932/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-687153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2728571932/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-687153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2728571932/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-687153 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-687153 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-687153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2728571932/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-687153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2728571932/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-687153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2728571932/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.96s)
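
VerifyCleanup relies on a single cleanup entry point rather than killing each mount process individually: passing --kill=true asks minikube to terminate the mount processes it spawned for the profile, which matches the three "unable to find parent, assuming dead" messages that follow. The cleanup call from this run:

  out/minikube-linux-arm64 mount -p functional-687153 --kill=true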

TestFunctional/delete_addon-resizer_images (0.1s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-687153
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-687153
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.03s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-687153
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestIngressAddonLegacy/StartLegacyK8sCluster (97.58s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-949230 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0906 20:09:52.439037  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-949230 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m37.580472349s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (97.58s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-949230 addons enable ingress --alsologtostderr -v=5
E0906 20:10:20.130680  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-949230 addons enable ingress --alsologtostderr -v=5: (11.999346263s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.00s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.69s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-949230 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.69s)

TestJSONOutput/start/Command (76.77s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-546272 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0906 20:13:59.022350  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-546272 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m16.768008973s)
--- PASS: TestJSONOutput/start/Command (76.77s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.85s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-546272 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.85s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.78s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-546272 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.78s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.91s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-546272 --output=json --user=testUser
E0906 20:14:52.438453  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-546272 --output=json --user=testUser: (5.914573418s)
--- PASS: TestJSONOutput/stop/Command (5.91s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
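
Taken together, the TestJSONOutput blocks drive one cluster through its whole lifecycle with machine-readable output: each command is run with --output=json so every step is emitted as a CloudEvents-style JSON event (the same format is visible in the TestErrorJSONOutput stdout below) rather than human-readable text. The sequence from this run:

  out/minikube-linux-arm64 start -p json-output-546272 --output=json --user=testUser --memory=2200 --wait=true --driver=docker --container-runtime=crio
  out/minikube-linux-arm64 pause -p json-output-546272 --output=json --user=testUser
  out/minikube-linux-arm64 unpause -p json-output-546272 --output=json --user=testUser
  out/minikube-linux-arm64 stop -p json-output-546272 --output=json --user=testUser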

TestErrorJSONOutput (0.24s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-302009 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-302009 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (91.584186ms)
-- stdout --
	{"specversion":"1.0","id":"f83adfa7-57eb-4402-baa8-e7f90b82a047","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-302009] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8cbc0208-53be-4ecb-9ba8-1bdb5bb65c38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17116"}}
	{"specversion":"1.0","id":"2085c7e3-4257-4a97-87b4-f8fb0a78238e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a49a4eee-7414-43c0-8b7a-2c52f9ccee4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig"}}
	{"specversion":"1.0","id":"a13c68b1-2c99-4119-b2ec-35d3ecceb87b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube"}}
	{"specversion":"1.0","id":"0e0e230b-ba81-4aad-8347-5d0c54d124d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"05694219-1d36-4e60-9b1b-812af81fc3a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7c518c15-2d45-48a0-9652-24f9212195b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-302009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-302009
--- PASS: TestErrorJSONOutput (0.24s)

TestKicCustomNetwork/create_custom_network (44.31s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-675172 --network=
E0906 20:15:20.942568  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
E0906 20:15:28.132993  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
E0906 20:15:28.138226  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
E0906 20:15:28.148470  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
E0906 20:15:28.168714  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
E0906 20:15:28.209112  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
E0906 20:15:28.289412  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
E0906 20:15:28.449810  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
E0906 20:15:28.770362  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
E0906 20:15:29.411644  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
E0906 20:15:30.691861  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
E0906 20:15:33.252999  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
E0906 20:15:38.373484  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-675172 --network=: (42.270459833s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-675172" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-675172
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-675172: (2.021107984s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.31s)

TestKicCustomNetwork/use_default_bridge_network (34.08s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-044721 --network=bridge
E0906 20:15:48.613622  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
E0906 20:16:09.093798  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-044721 --network=bridge: (32.030747497s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-044721" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-044721
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-044721: (2.025119109s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.08s)

TestKicExistingNetwork (34.06s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-850748 --network=existing-network
E0906 20:16:50.054191  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-850748 --network=existing-network: (31.857134988s)
helpers_test.go:175: Cleaning up "existing-network-850748" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-850748
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-850748: (2.037598916s)
--- PASS: TestKicExistingNetwork (34.06s)

TestKicCustomSubnet (34.49s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-014963 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-014963 --subnet=192.168.60.0/24: (32.389518886s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-014963 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-014963" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-014963
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-014963: (2.067608526s)
--- PASS: TestKicCustomSubnet (34.49s)

TestKicStaticIP (39.02s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-271110 --static-ip=192.168.200.200
E0906 20:17:37.103936  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
E0906 20:18:04.783244  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-271110 --static-ip=192.168.200.200: (36.781881753s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-271110 ip
helpers_test.go:175: Cleaning up "static-ip-271110" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-271110
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-271110: (2.037282249s)
--- PASS: TestKicStaticIP (39.02s)
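
The TestKic* cases above cover the Docker-network knobs of the kic driver: --network picks (or, when left empty, creates) the Docker network, --subnet fixes its CIDR, and --static-ip pins the node's address. The flag combinations exercised in this run, plus the inspect command the subnet test uses for verification:

  out/minikube-linux-arm64 start -p docker-network-675172 --network=
  out/minikube-linux-arm64 start -p docker-network-044721 --network=bridge
  out/minikube-linux-arm64 start -p existing-network-850748 --network=existing-network
  out/minikube-linux-arm64 start -p custom-subnet-014963 --subnet=192.168.60.0/24
  out/minikube-linux-arm64 start -p static-ip-271110 --static-ip=192.168.200.200
  docker network inspect custom-subnet-014963 --format "{{(index .IPAM.Config 0).Subnet}}"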

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (76.81s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-984447 --driver=docker  --container-runtime=crio
E0906 20:18:11.975228  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-984447 --driver=docker  --container-runtime=crio: (35.695828711s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-987040 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-987040 --driver=docker  --container-runtime=crio: (35.806949384s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-984447
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-987040
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-987040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-987040
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-987040: (2.032280902s)
helpers_test.go:175: Cleaning up "first-984447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-984447
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-984447: (1.976591343s)
--- PASS: TestMinikubeProfile (76.81s)

TestMountStart/serial/StartWithMountFirst (7.15s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-811361 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-811361 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.150901191s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.15s)

TestMountStart/serial/VerifyMountFirst (0.3s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-811361 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

TestMountStart/serial/StartWithMountSecond (7.04s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-813478 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-813478 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.038902701s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.04s)

TestMountStart/serial/VerifyMountSecond (0.28s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-813478 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.7s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-811361 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-811361 --alsologtostderr -v=5: (1.703252073s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-813478 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.23s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-813478
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-813478: (1.231950741s)
--- PASS: TestMountStart/serial/Stop (1.23s)

TestMountStart/serial/RestartStopped (8.14s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-813478
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-813478: (7.142801274s)
--- PASS: TestMountStart/serial/RestartStopped (8.14s)

TestMountStart/serial/VerifyMountPostStop (0.29s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-813478 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)
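
The TestMountStart sequence checks that a --mount flag given at start time survives the node's stop/start cycle: the host mount at /minikube-host is verified after the initial start, after the sibling profile is deleted, and again after a stop and restart. The key commands from this run:

  out/minikube-linux-arm64 start -p mount-start-2-813478 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=crio
  out/minikube-linux-arm64 -p mount-start-2-813478 ssh -- ls /minikube-host
  out/minikube-linux-arm64 stop -p mount-start-2-813478
  out/minikube-linux-arm64 start -p mount-start-2-813478
  out/minikube-linux-arm64 -p mount-start-2-813478 ssh -- ls /minikube-host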

TestMultiNode/serial/FreshStart2Nodes (99.42s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-782472 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0906 20:20:28.132752  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
E0906 20:20:55.815411  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
E0906 20:21:15.491774  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-782472 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m38.845288208s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (99.42s)

TestMultiNode/serial/DeployApp2Nodes (5.61s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-782472 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-782472 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-782472 -- rollout status deployment/busybox: (3.39811978s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-782472 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-782472 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-782472 -- exec busybox-5bc68d56bd-pwl5s -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-782472 -- exec busybox-5bc68d56bd-thpl6 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-782472 -- exec busybox-5bc68d56bd-pwl5s -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-782472 -- exec busybox-5bc68d56bd-thpl6 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-782472 -- exec busybox-5bc68d56bd-pwl5s -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-782472 -- exec busybox-5bc68d56bd-thpl6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.61s)

                                                
                                    
TestMultiNode/serial/AddNode (23.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-782472 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-782472 -v 3 --alsologtostderr: (22.892966913s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.65s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

                                                
                                    
TestMultiNode/serial/CopyFile (11.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 cp testdata/cp-test.txt multinode-782472:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 ssh -n multinode-782472 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 cp multinode-782472:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1330132084/001/cp-test_multinode-782472.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 ssh -n multinode-782472 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 cp multinode-782472:/home/docker/cp-test.txt multinode-782472-m02:/home/docker/cp-test_multinode-782472_multinode-782472-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 ssh -n multinode-782472 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 ssh -n multinode-782472-m02 "sudo cat /home/docker/cp-test_multinode-782472_multinode-782472-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 cp multinode-782472:/home/docker/cp-test.txt multinode-782472-m03:/home/docker/cp-test_multinode-782472_multinode-782472-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 ssh -n multinode-782472 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 ssh -n multinode-782472-m03 "sudo cat /home/docker/cp-test_multinode-782472_multinode-782472-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 cp testdata/cp-test.txt multinode-782472-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 ssh -n multinode-782472-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 cp multinode-782472-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1330132084/001/cp-test_multinode-782472-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 ssh -n multinode-782472-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 cp multinode-782472-m02:/home/docker/cp-test.txt multinode-782472:/home/docker/cp-test_multinode-782472-m02_multinode-782472.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 ssh -n multinode-782472-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 ssh -n multinode-782472 "sudo cat /home/docker/cp-test_multinode-782472-m02_multinode-782472.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 cp multinode-782472-m02:/home/docker/cp-test.txt multinode-782472-m03:/home/docker/cp-test_multinode-782472-m02_multinode-782472-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 ssh -n multinode-782472-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 ssh -n multinode-782472-m03 "sudo cat /home/docker/cp-test_multinode-782472-m02_multinode-782472-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 cp testdata/cp-test.txt multinode-782472-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 ssh -n multinode-782472-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 cp multinode-782472-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1330132084/001/cp-test_multinode-782472-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 ssh -n multinode-782472-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 cp multinode-782472-m03:/home/docker/cp-test.txt multinode-782472:/home/docker/cp-test_multinode-782472-m03_multinode-782472.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 ssh -n multinode-782472-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 ssh -n multinode-782472 "sudo cat /home/docker/cp-test_multinode-782472-m03_multinode-782472.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 cp multinode-782472-m03:/home/docker/cp-test.txt multinode-782472-m02:/home/docker/cp-test_multinode-782472-m03_multinode-782472-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 ssh -n multinode-782472-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 ssh -n multinode-782472-m02 "sudo cat /home/docker/cp-test_multinode-782472-m03_multinode-782472-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.32s)

                                                
                                    
TestMultiNode/serial/StopNode (2.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-782472 node stop m03: (1.244936584s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-782472 status: exit status 7 (562.359476ms)

                                                
                                                
-- stdout --
	multinode-782472
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-782472-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-782472-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-782472 status --alsologtostderr: exit status 7 (611.506302ms)

                                                
                                                
-- stdout --
	multinode-782472
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-782472-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-782472-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 20:22:20.021961  731441 out.go:296] Setting OutFile to fd 1 ...
	I0906 20:22:20.022249  731441 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:22:20.022259  731441 out.go:309] Setting ErrFile to fd 2...
	I0906 20:22:20.022266  731441 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:22:20.022618  731441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17116-652515/.minikube/bin
	I0906 20:22:20.022841  731441 out.go:303] Setting JSON to false
	I0906 20:22:20.022891  731441 mustload.go:65] Loading cluster: multinode-782472
	I0906 20:22:20.023013  731441 notify.go:220] Checking for updates...
	I0906 20:22:20.023316  731441 config.go:182] Loaded profile config "multinode-782472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 20:22:20.023336  731441 status.go:255] checking status of multinode-782472 ...
	I0906 20:22:20.023977  731441 cli_runner.go:164] Run: docker container inspect multinode-782472 --format={{.State.Status}}
	I0906 20:22:20.043425  731441 status.go:330] multinode-782472 host status = "Running" (err=<nil>)
	I0906 20:22:20.043449  731441 host.go:66] Checking if "multinode-782472" exists ...
	I0906 20:22:20.043766  731441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-782472
	I0906 20:22:20.064926  731441 host.go:66] Checking if "multinode-782472" exists ...
	I0906 20:22:20.065227  731441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 20:22:20.065274  731441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-782472
	I0906 20:22:20.096163  731441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33492 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/multinode-782472/id_rsa Username:docker}
	I0906 20:22:20.197325  731441 ssh_runner.go:195] Run: systemctl --version
	I0906 20:22:20.203407  731441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:22:20.217921  731441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 20:22:20.298337  731441 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-09-06 20:22:20.288194575 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0906 20:22:20.299028  731441 kubeconfig.go:92] found "multinode-782472" server: "https://192.168.58.2:8443"
	I0906 20:22:20.299050  731441 api_server.go:166] Checking apiserver status ...
	I0906 20:22:20.299094  731441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:22:20.314271  731441 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1273/cgroup
	I0906 20:22:20.328146  731441 api_server.go:182] apiserver freezer: "8:freezer:/docker/4f96b0b3ad5d839fd8a7a05da769dc0c581f9a93b92cde73511040b7cf72780a/crio/crio-a350439d7f6ed2881f0197afa7a0f3e64a25ee036881df5310c6b74723dfe955"
	I0906 20:22:20.328229  731441 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4f96b0b3ad5d839fd8a7a05da769dc0c581f9a93b92cde73511040b7cf72780a/crio/crio-a350439d7f6ed2881f0197afa7a0f3e64a25ee036881df5310c6b74723dfe955/freezer.state
	I0906 20:22:20.339606  731441 api_server.go:204] freezer state: "THAWED"
	I0906 20:22:20.339681  731441 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0906 20:22:20.348929  731441 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0906 20:22:20.348959  731441 status.go:421] multinode-782472 apiserver status = Running (err=<nil>)
	I0906 20:22:20.348975  731441 status.go:257] multinode-782472 status: &{Name:multinode-782472 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 20:22:20.348993  731441 status.go:255] checking status of multinode-782472-m02 ...
	I0906 20:22:20.349286  731441 cli_runner.go:164] Run: docker container inspect multinode-782472-m02 --format={{.State.Status}}
	I0906 20:22:20.368175  731441 status.go:330] multinode-782472-m02 host status = "Running" (err=<nil>)
	I0906 20:22:20.368208  731441 host.go:66] Checking if "multinode-782472-m02" exists ...
	I0906 20:22:20.368599  731441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-782472-m02
	I0906 20:22:20.387485  731441 host.go:66] Checking if "multinode-782472-m02" exists ...
	I0906 20:22:20.387817  731441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 20:22:20.387863  731441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-782472-m02
	I0906 20:22:20.413663  731441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33497 SSHKeyPath:/home/jenkins/minikube-integration/17116-652515/.minikube/machines/multinode-782472-m02/id_rsa Username:docker}
	I0906 20:22:20.512665  731441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:22:20.526687  731441 status.go:257] multinode-782472-m02 status: &{Name:multinode-782472-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0906 20:22:20.526724  731441 status.go:255] checking status of multinode-782472-m03 ...
	I0906 20:22:20.527026  731441 cli_runner.go:164] Run: docker container inspect multinode-782472-m03 --format={{.State.Status}}
	I0906 20:22:20.548003  731441 status.go:330] multinode-782472-m03 host status = "Stopped" (err=<nil>)
	I0906 20:22:20.548027  731441 status.go:343] host is not running, skipping remaining checks
	I0906 20:22:20.548034  731441 status.go:257] multinode-782472-m03 status: &{Name:multinode-782472-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)
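Note: the stderr trace above shows how the status command arrives at its verdict: it inspects the Docker container state, opens an SSH session, checks whether kubelet is active, locates the kube-apiserver process and its cgroup freezer state, and finally probes the /healthz endpoint. A rough manual equivalent of the last two checks, shown only as an illustration against the profile used in this run:

	out/minikube-linux-arm64 -p multinode-782472 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	kubectl --context multinode-782472 get --raw /healthz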

                                                
                                    
TestMultiNode/serial/StartAfterStop (12.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-782472 node start m03 --alsologtostderr: (11.990928838s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.87s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (123.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-782472
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-782472
E0906 20:22:37.103187  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-782472: (25.128850831s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-782472 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-782472 --wait=true -v=8 --alsologtostderr: (1m38.007849238s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-782472
--- PASS: TestMultiNode/serial/RestartKeepsNodes (123.27s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-782472 node delete m03: (4.370757975s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.13s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 stop
E0906 20:24:52.439176  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-782472 stop: (23.892640021s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-782472 status: exit status 7 (88.119841ms)

                                                
                                                
-- stdout --
	multinode-782472
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-782472-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-782472 status --alsologtostderr: exit status 7 (88.205978ms)

                                                
                                                
-- stdout --
	multinode-782472
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-782472-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 20:25:05.858476  739471 out.go:296] Setting OutFile to fd 1 ...
	I0906 20:25:05.858619  739471 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:25:05.858629  739471 out.go:309] Setting ErrFile to fd 2...
	I0906 20:25:05.858635  739471 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:25:05.858911  739471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17116-652515/.minikube/bin
	I0906 20:25:05.859087  739471 out.go:303] Setting JSON to false
	I0906 20:25:05.859155  739471 mustload.go:65] Loading cluster: multinode-782472
	I0906 20:25:05.859258  739471 notify.go:220] Checking for updates...
	I0906 20:25:05.859538  739471 config.go:182] Loaded profile config "multinode-782472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 20:25:05.859548  739471 status.go:255] checking status of multinode-782472 ...
	I0906 20:25:05.860017  739471 cli_runner.go:164] Run: docker container inspect multinode-782472 --format={{.State.Status}}
	I0906 20:25:05.878906  739471 status.go:330] multinode-782472 host status = "Stopped" (err=<nil>)
	I0906 20:25:05.878928  739471 status.go:343] host is not running, skipping remaining checks
	I0906 20:25:05.878936  739471 status.go:257] multinode-782472 status: &{Name:multinode-782472 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 20:25:05.878962  739471 status.go:255] checking status of multinode-782472-m02 ...
	I0906 20:25:05.879281  739471 cli_runner.go:164] Run: docker container inspect multinode-782472-m02 --format={{.State.Status}}
	I0906 20:25:05.897494  739471 status.go:330] multinode-782472-m02 host status = "Stopped" (err=<nil>)
	I0906 20:25:05.897515  739471 status.go:343] host is not running, skipping remaining checks
	I0906 20:25:05.897522  739471 status.go:257] multinode-782472-m02 status: &{Name:multinode-782472-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.07s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (81.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-782472 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0906 20:25:28.132225  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-782472 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m20.339859774s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-782472 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (81.27s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (33.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-782472
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-782472-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-782472-m02 --driver=docker  --container-runtime=crio: exit status 14 (92.235757ms)

                                                
                                                
-- stdout --
	* [multinode-782472-m02] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-782472-m02' is duplicated with machine name 'multinode-782472-m02' in profile 'multinode-782472'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-782472-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-782472-m03 --driver=docker  --container-runtime=crio: (31.232606834s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-782472
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-782472: exit status 80 (346.921743ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-782472
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-782472-m03 already exists in multinode-782472-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-782472-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-782472-m03: (2.023196384s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.75s)

                                                
                                    
TestPreload (173.9s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-826413 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0906 20:27:37.102384  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-826413 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m28.172670586s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-826413 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-826413 image pull gcr.io/k8s-minikube/busybox: (2.175820301s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-826413
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-826413: (5.884742111s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-826413 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0906 20:29:00.169103  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
E0906 20:29:52.438823  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-826413 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m14.918538891s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-826413 image list
helpers_test.go:175: Cleaning up "test-preload-826413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-826413
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-826413: (2.509414633s)
--- PASS: TestPreload (173.90s)

                                                
                                    
TestScheduledStopUnix (110.69s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-980317 --memory=2048 --driver=docker  --container-runtime=crio
E0906 20:30:28.132978  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-980317 --memory=2048 --driver=docker  --container-runtime=crio: (34.163594652s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-980317 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-980317 -n scheduled-stop-980317
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-980317 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-980317 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-980317 -n scheduled-stop-980317
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-980317
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-980317 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-980317
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-980317: exit status 7 (67.691702ms)

                                                
                                                
-- stdout --
	scheduled-stop-980317
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-980317 -n scheduled-stop-980317
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-980317 -n scheduled-stop-980317: exit status 7 (68.849961ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-980317" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-980317
E0906 20:31:51.175680  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-980317: (4.82064088s)
--- PASS: TestScheduledStopUnix (110.69s)
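Note: as the invocations above suggest, the scheduled-stop feature is driven by two flags: --schedule delays the stop by the given duration, and --cancel-scheduled aborts a pending stop. A minimal usage sketch (profile name is a placeholder):

	minikube stop -p <profile> --schedule 5m
	minikube stop -p <profile> --cancel-scheduled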

                                                
                                    
TestInsufficientStorage (11.3s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-500291 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-500291 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.707548653s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d8c735d2-e4ba-4425-ba65-b9082e904905","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-500291] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a46ec1fa-d86e-4aff-9059-c18d92e0edbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17116"}}
	{"specversion":"1.0","id":"47199f29-18ad-48cc-b2b8-74dcebf60687","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"304126ef-5ac8-4781-a79b-10ccd21f35d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig"}}
	{"specversion":"1.0","id":"37d9c704-fc32-4492-905d-8d28db23fd79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube"}}
	{"specversion":"1.0","id":"8a33dcf2-acb6-49a1-b866-7724b85bae18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d938b4cf-b245-424a-b044-04e2e17d1ba4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3435eca6-1f98-4006-8e85-2de2916c7180","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"d29d8bd4-79a5-4a13-a500-d384b4f1743d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"dd4e59c9-6aa2-46df-b977-6c711a41efa8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5bb294bb-5236-4ad4-bcf0-e2c00339f746","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ba0d414f-2283-4ab2-aff0-f942b1bb6b9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-500291 in cluster insufficient-storage-500291","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"14d19797-8cf1-41d9-9664-1a0f5fda1b79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b65b54b2-062c-4541-a2b1-d704c9503d8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"9af7ee0f-7d58-4236-ba2c-76b8d3e33097","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-500291 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-500291 --output=json --layout=cluster: exit status 7 (331.092401ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-500291","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-500291","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 20:32:01.039691  756268 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-500291" does not appear in /home/jenkins/minikube-integration/17116-652515/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-500291 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-500291 --output=json --layout=cluster: exit status 7 (327.946231ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-500291","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-500291","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 20:32:01.368753  756319 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-500291" does not appear in /home/jenkins/minikube-integration/17116-652515/kubeconfig
	E0906 20:32:01.381090  756319 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/insufficient-storage-500291/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-500291" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-500291
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-500291: (1.933317688s)
--- PASS: TestInsufficientStorage (11.30s)
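Note: the JSON events above show this run setting MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19, which appear to make minikube treat /var as nearly full so the RSRC_DOCKER_STORAGE error path (exit code 26) can be exercised without actually exhausting the disk. Assuming those test-only variables behave as the log suggests, the same failure can be reproduced with (profile name illustrative):

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  out/minikube-linux-arm64 start -p insufficient-storage-demo --memory=2048 --output=json --wait=true --driver=docker --container-runtime=crio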

                                                
                                    
TestKubernetesUpgrade (398.69s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-680277 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0906 20:34:52.439665  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-680277 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m3.35815369s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-680277
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-680277: (1.376637888s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-680277 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-680277 status --format={{.Host}}: exit status 7 (84.746751ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-680277 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-680277 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m57.202395449s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-680277 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-680277 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-680277 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (146.280021ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-680277] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-680277
	    minikube start -p kubernetes-upgrade-680277 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6802772 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.1, by running:
	    
	    minikube start -p kubernetes-upgrade-680277 --kubernetes-version=v1.28.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-680277 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-680277 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.073871291s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-680277" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-680277
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-680277: (2.320147323s)
--- PASS: TestKubernetesUpgrade (398.69s)
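Note: the sequence above exercises the supported upgrade path: start on the old release, stop the cluster, then start again with the newer --kubernetes-version, while an in-place downgrade is rejected with K8S_DOWNGRADE_UNSUPPORTED. Using the same flags as this run, the upgrade boils down to (profile name illustrative):

	minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
	minikube stop -p upgrade-demo
	minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.28.1 --driver=docker --container-runtime=crio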

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-063967 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-063967 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (105.088391ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-063967] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
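Note: the MK_USAGE error above enforces that --kubernetes-version and --no-kubernetes are mutually exclusive. If a version is pinned in the global config, the hint in the output applies; otherwise a plain no-Kubernetes start works, as the later subtests show (profile name illustrative):

	minikube config unset kubernetes-version
	minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=crio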

                                                
                                    
TestPause/serial/Start (90.92s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-056574 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-056574 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m30.918499938s)
--- PASS: TestPause/serial/Start (90.92s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (46.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-063967 --driver=docker  --container-runtime=crio
E0906 20:32:37.102166  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-063967 --driver=docker  --container-runtime=crio: (45.887103747s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-063967 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (46.34s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (23.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-063967 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-063967 --no-kubernetes --driver=docker  --container-runtime=crio: (21.296168894s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-063967 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-063967 status -o json: exit status 2 (334.3727ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-063967","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-063967
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-063967: (2.036425711s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (23.67s)

                                                
                                    
TestNoKubernetes/serial/Start (6.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-063967 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-063967 --no-kubernetes --driver=docker  --container-runtime=crio: (6.976179589s)
--- PASS: TestNoKubernetes/serial/Start (6.98s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-063967 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-063967 "sudo systemctl is-active --quiet service kubelet": exit status 1 (314.76833ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.00s)
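Both invocations above only have to succeed; nothing is asserted about their content. A sketch of pulling profile names out of the JSON form (the .valid[].Name field layout is an assumption about minikube's JSON schema, and jq is not used by the test):

    minikube profile list --output=json | jq -r '.valid[].Name'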

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-063967
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-063967: (1.239546299s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-063967 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-063967 --driver=docker  --container-runtime=crio: (8.139096197s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.14s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-063967 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-063967 "sudo systemctl is-active --quiet service kubelet": exit status 1 (331.26511ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.09s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.09s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.67s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-877553
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.67s)
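The test only verifies that the logs command succeeds against the stopped-then-upgraded profile. When the output is needed for later inspection it can also be written to a file; the --file flag shown below is assumed from current minikube releases rather than taken from this run:

    minikube logs -p stopped-upgrade-877553 --file=stopped-upgrade-877553.log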

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-875195 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-875195 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (254.730074ms)

                                                
                                                
-- stdout --
	* [false-875195] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 20:39:35.326393  792259 out.go:296] Setting OutFile to fd 1 ...
	I0906 20:39:35.326615  792259 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:39:35.326640  792259 out.go:309] Setting ErrFile to fd 2...
	I0906 20:39:35.326658  792259 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 20:39:35.326991  792259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17116-652515/.minikube/bin
	I0906 20:39:35.327449  792259 out.go:303] Setting JSON to false
	I0906 20:39:35.328553  792259 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11930,"bootTime":1694020846,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0906 20:39:35.328659  792259 start.go:138] virtualization:  
	I0906 20:39:35.331417  792259 out.go:177] * [false-875195] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0906 20:39:35.334147  792259 notify.go:220] Checking for updates...
	I0906 20:39:35.337993  792259 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 20:39:35.339846  792259 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 20:39:35.341735  792259 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17116-652515/kubeconfig
	I0906 20:39:35.344623  792259 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17116-652515/.minikube
	I0906 20:39:35.346772  792259 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0906 20:39:35.348480  792259 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 20:39:35.350972  792259 config.go:182] Loaded profile config "kubernetes-upgrade-680277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0906 20:39:35.351139  792259 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 20:39:35.378867  792259 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0906 20:39:35.378975  792259 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 20:39:35.497695  792259 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-09-06 20:39:35.48800424 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0906 20:39:35.497796  792259 docker.go:294] overlay module found
	I0906 20:39:35.499817  792259 out.go:177] * Using the docker driver based on user configuration
	I0906 20:39:35.503175  792259 start.go:298] selected driver: docker
	I0906 20:39:35.503193  792259 start.go:902] validating driver "docker" against <nil>
	I0906 20:39:35.503206  792259 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 20:39:35.505846  792259 out.go:177] 
	W0906 20:39:35.508001  792259 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0906 20:39:35.509856  792259 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-875195 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-875195

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-875195

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-875195

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-875195

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-875195

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-875195

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-875195

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-875195

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-875195

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-875195

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-875195

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-875195" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-875195" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 06 Sep 2023 20:36:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-680277
contexts:
- context:
    cluster: kubernetes-upgrade-680277
    user: kubernetes-upgrade-680277
  name: kubernetes-upgrade-680277
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-680277
  user:
    client-certificate: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/kubernetes-upgrade-680277/client.crt
    client-key: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/kubernetes-upgrade-680277/client.key
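The dump above explains the repeated "context was not found" and "does not exist" errors throughout this debugLogs section: current-context is empty and the only context present is kubernetes-upgrade-680277, while the debug commands ask for false-875195, a profile that was never started. A sketch of how a valid context would be selected instead, assuming the kubernetes-upgrade-680277 cluster is still reachable:

    kubectl config get-contexts
    kubectl --context kubernetes-upgrade-680277 get nodes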

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-875195

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-875195"

                                                
                                                
----------------------- debugLogs end: false-875195 [took: 3.742573872s] --------------------------------
helpers_test.go:175: Cleaning up "false-875195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-875195
--- PASS: TestNetworkPlugins/group/false (4.28s)
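This group passes because the failure is the expected result: exit status 14 (MK_USAGE) confirms that minikube rejects --cni=false with the crio runtime, which requires a CNI plugin. A sketch of a start invocation that would be accepted instead, with bridge assumed to be one of the valid --cni values:

    minikube start -p false-875195 --memory=2048 --driver=docker --container-runtime=crio --cni=bridge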

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (129.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-636595 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-636595 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m9.005168898s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (129.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-636595 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a1d734c3-d8bb-49d2-b2b5-b176e717daf5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a1d734c3-d8bb-49d2-b2b5-b176e717daf5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.036931068s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-636595 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.57s)
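The harness polls for the busybox pod itself; an equivalent manual check, using kubectl's own wait support rather than the helper in helpers_test.go, would look roughly like this:

    kubectl --context old-k8s-version-636595 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-636595 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m0s
    kubectl --context old-k8s-version-636595 exec busybox -- /bin/sh -c "ulimit -n"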

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-636595 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-636595 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.05s)
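The --images and --registries flags redirect the metrics-server image to a fake registry so that later steps can detect the addon without it actually pulling. A sketch of confirming the override landed in the deployment spec (the container index and jsonpath layout are assumptions):

    kubectl --context old-k8s-version-636595 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'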

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-636595 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-636595 --alsologtostderr -v=3: (12.203980921s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.20s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-636595 -n old-k8s-version-636595
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-636595 -n old-k8s-version-636595: exit status 7 (73.910287ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-636595 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
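Exit status 7 from the status command is tolerated here ("may be ok") because the profile was just stopped on purpose; the dashboard addon is then enabled against the stopped profile. A minimal sketch of the same sequence:

    minikube status --format='{{.Host}}' -p old-k8s-version-636595 -n old-k8s-version-636595 || echo "host not running (expected right after stop)"
    minikube addons enable dashboard -p old-k8s-version-636595 --images=MetricsScraper=registry.k8s.io/echoserver:1.4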

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (429.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-636595 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-636595 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m9.117776505s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-636595 -n old-k8s-version-636595
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (429.52s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (61.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-370162 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
E0906 20:45:28.132210  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
E0906 20:45:40.170313  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-370162 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (1m1.040850826s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (61.04s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-370162 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ce514592-3d03-41ef-b1ff-928d8c85ccf2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ce514592-3d03-41ef-b1ff-928d8c85ccf2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.029590788s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-370162 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.51s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-370162 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-370162 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.084858855s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-370162 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-370162 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-370162 --alsologtostderr -v=3: (12.203467332s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-370162 -n no-preload-370162
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-370162 -n no-preload-370162: exit status 7 (86.0734ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-370162 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (346.76s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-370162 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
E0906 20:47:37.101844  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
E0906 20:48:31.175922  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
E0906 20:49:52.439005  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
E0906 20:50:28.132005  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-370162 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (5m46.242779393s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-370162 -n no-preload-370162
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (346.76s)
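The interleaved cert_rotation errors reference client.crt files for profiles exercised earlier in the run (functional-687153, ingress-addon-legacy-949230, addons-342654) rather than no-preload-370162, so they appear to be stale watches on certificates of already-deleted profiles and do not affect this test's result. A quick way to see which profile certificate directories still exist, using the MINIKUBE_HOME path shown in the environment dump earlier in this report:

    ls /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/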

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-7sn2l" [9bf68d28-4462-42ff-9135-90aa48279155] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.026806358s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-7sn2l" [9bf68d28-4462-42ff-9135-90aa48279155] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01540465s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-636595 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-636595 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.36s)
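The assertion only lists the non-minikube images it finds in the crictl output. A sketch of pulling the same tag list out by hand, assuming crictl's JSON output exposes an images[].repoTags array:

    minikube ssh -p old-k8s-version-636595 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'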

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-636595 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-636595 -n old-k8s-version-636595
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-636595 -n old-k8s-version-636595: exit status 2 (363.211703ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-636595 -n old-k8s-version-636595
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-636595 -n old-k8s-version-636595: exit status 2 (363.895043ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-636595 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-636595 -n old-k8s-version-636595
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-636595 -n old-k8s-version-636595
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.47s)
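The exit status 2 results above are expected: while the profile is paused, the status command reports the API server as Paused and the kubelet as Stopped and exits non-zero. The full sequence the test exercises, sketched as plain commands:

    minikube pause   -p old-k8s-version-636595 --alsologtostderr -v=1
    minikube status  -p old-k8s-version-636595 --format='{{.APIServer}}'   # prints "Paused" and exits 2 while paused
    minikube unpause -p old-k8s-version-636595 --alsologtostderr -v=1
    minikube status  -p old-k8s-version-636595 --format='{{.APIServer}}'   # expected to report "Running" again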

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (81.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-230951 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-230951 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (1m21.218378158s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (81.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-c5sdb" [087b2481-a0c4-4015-acd0-d0fabcd335a0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-c5sdb" [087b2481-a0c4-4015-acd0-d0fabcd335a0] Running
E0906 20:52:37.101767  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.037049306s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.04s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-c5sdb" [087b2481-a0c4-4015-acd0-d0fabcd335a0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.03065202s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-370162 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-370162 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.40s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.74s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-370162 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-370162 -n no-preload-370162
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-370162 -n no-preload-370162: exit status 2 (405.87957ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-370162 -n no-preload-370162
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-370162 -n no-preload-370162: exit status 2 (472.219163ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-370162 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-370162 -n no-preload-370162
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-370162 -n no-preload-370162
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.74s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.8s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-230951 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a6a5ba5d-6375-4d30-889d-38e37bb77e2d] Pending
helpers_test.go:344: "busybox" [a6a5ba5d-6375-4d30-889d-38e37bb77e2d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a6a5ba5d-6375-4d30-889d-38e37bb77e2d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.047628046s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-230951 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.80s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-055003 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-055003 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (1m23.219168793s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.22s)
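
The distinguishing flag for this profile is --apiserver-port=8444 (minikube's default is 8443). The start command from the run, plus an illustrative check that kubeconfig picked up the non-default port (the jsonpath query is an assumption for illustration, not part of the test):

    out/minikube-linux-arm64 start -p default-k8s-diff-port-055003 --memory=2200 --alsologtostderr \
      --wait=true --apiserver-port=8444 --driver=docker --container-runtime=crio --kubernetes-version=v1.28.1
    # Expect the cluster's server URL to end in :8444.
    kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-055003")].cluster.server}'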

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-230951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-230951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.311388226s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-230951 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.41s)
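
EnableAddonWhileActive exercises minikube's per-addon image overrides: --images and --registries point the metrics-server addon at a substitute image and registry so the test never depends on the real upstream. Sketch with the commands from the run; the grep is only an illustrative way to confirm the override landed in the deployment spec:

    out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-230951 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    # The deployment's image field should now reference the fake.domain registry override.
    kubectl --context embed-certs-230951 describe deploy/metrics-server -n kube-system | grep -i image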

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-230951 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-230951 --alsologtostderr -v=3: (12.147053084s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.15s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-230951 -n embed-certs-230951
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-230951 -n embed-certs-230951: exit status 7 (116.517695ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-230951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)
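
Stop and EnableAddonAfterStop together show that addons can be toggled against a powered-off profile: `stop` brings the node down, `status` then exits 7 with Host=Stopped (again treated as "may be ok"), and enabling the dashboard addon while stopped effectively records it in the profile config so it takes effect on the next start (the SecondStart step below). Sketch using the same commands:

    out/minikube-linux-arm64 stop -p embed-certs-230951 --alsologtostderr -v=3
    # Exit status 7 here just means the host is stopped, as the log notes.
    out/minikube-linux-arm64 status --format='{{.Host}}' -p embed-certs-230951 -n embed-certs-230951 || echo "host stopped (exit $?)"
    out/minikube-linux-arm64 addons enable dashboard -p embed-certs-230951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4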

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (354.91s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-230951 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
E0906 20:53:38.794192  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/old-k8s-version-636595/client.crt: no such file or directory
E0906 20:53:38.799423  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/old-k8s-version-636595/client.crt: no such file or directory
E0906 20:53:38.809666  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/old-k8s-version-636595/client.crt: no such file or directory
E0906 20:53:38.829918  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/old-k8s-version-636595/client.crt: no such file or directory
E0906 20:53:38.870171  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/old-k8s-version-636595/client.crt: no such file or directory
E0906 20:53:38.950417  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/old-k8s-version-636595/client.crt: no such file or directory
E0906 20:53:39.110741  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/old-k8s-version-636595/client.crt: no such file or directory
E0906 20:53:39.431531  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/old-k8s-version-636595/client.crt: no such file or directory
E0906 20:53:40.071947  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/old-k8s-version-636595/client.crt: no such file or directory
E0906 20:53:41.352161  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/old-k8s-version-636595/client.crt: no such file or directory
E0906 20:53:43.912620  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/old-k8s-version-636595/client.crt: no such file or directory
E0906 20:53:49.032975  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/old-k8s-version-636595/client.crt: no such file or directory
E0906 20:53:59.273789  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/old-k8s-version-636595/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-230951 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (5m54.249683307s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-230951 -n embed-certs-230951
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (354.91s)
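
--embed-certs makes minikube inline the client certificate and key into kubeconfig rather than referencing the files under the profile directory (the .minikube/profiles/<name>/client.crt paths seen in the cert_rotation messages above). A quick way to observe the effect after the restart; the kubectl config commands are an assumption added for illustration, not part of the test:

    out/minikube-linux-arm64 start -p embed-certs-230951 --memory=2200 --alsologtostderr --wait=true \
      --embed-certs --driver=docker --container-runtime=crio --kubernetes-version=v1.28.1
    # With embedded certs, client-certificate-data is a base64 blob instead of a file path.
    kubectl config view --raw -o jsonpath='{.users[?(@.name=="embed-certs-230951")].user.client-certificate-data}' | head -c 32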

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.52s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-055003 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e2b8d2c6-14b4-4b55-af76-f8c42f194b59] Pending
helpers_test.go:344: "busybox" [e2b8d2c6-14b4-4b55-af76-f8c42f194b59] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0906 20:54:19.754712  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/old-k8s-version-636595/client.crt: no such file or directory
helpers_test.go:344: "busybox" [e2b8d2c6-14b4-4b55-af76-f8c42f194b59] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.037373026s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-055003 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.52s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-055003 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-055003 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.144837595s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-055003 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-055003 --alsologtostderr -v=3
E0906 20:54:35.493171  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-055003 --alsologtostderr -v=3: (12.094040622s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-055003 -n default-k8s-diff-port-055003
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-055003 -n default-k8s-diff-port-055003: exit status 7 (99.313846ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-055003 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (359.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-055003 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
E0906 20:54:52.438171  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
E0906 20:55:00.714951  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/old-k8s-version-636595/client.crt: no such file or directory
E0906 20:55:28.133080  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
E0906 20:56:19.400984  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/no-preload-370162/client.crt: no such file or directory
E0906 20:56:19.406363  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/no-preload-370162/client.crt: no such file or directory
E0906 20:56:19.416691  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/no-preload-370162/client.crt: no such file or directory
E0906 20:56:19.436993  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/no-preload-370162/client.crt: no such file or directory
E0906 20:56:19.477303  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/no-preload-370162/client.crt: no such file or directory
E0906 20:56:19.557609  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/no-preload-370162/client.crt: no such file or directory
E0906 20:56:19.718131  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/no-preload-370162/client.crt: no such file or directory
E0906 20:56:20.038564  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/no-preload-370162/client.crt: no such file or directory
E0906 20:56:20.679256  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/no-preload-370162/client.crt: no such file or directory
E0906 20:56:21.959480  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/no-preload-370162/client.crt: no such file or directory
E0906 20:56:22.635445  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/old-k8s-version-636595/client.crt: no such file or directory
E0906 20:56:24.519712  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/no-preload-370162/client.crt: no such file or directory
E0906 20:56:29.640588  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/no-preload-370162/client.crt: no such file or directory
E0906 20:56:39.880978  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/no-preload-370162/client.crt: no such file or directory
E0906 20:57:00.361544  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/no-preload-370162/client.crt: no such file or directory
E0906 20:57:37.102286  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
E0906 20:57:41.321763  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/no-preload-370162/client.crt: no such file or directory
E0906 20:58:38.794271  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/old-k8s-version-636595/client.crt: no such file or directory
E0906 20:59:03.242862  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/no-preload-370162/client.crt: no such file or directory
E0906 20:59:06.476395  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/old-k8s-version-636595/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-055003 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (5m58.423833184s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-055003 -n default-k8s-diff-port-055003
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (359.12s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hthlh" [04f56d6b-9772-448b-8209-bb3e6c9b34fd] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hthlh" [04f56d6b-9772-448b-8209-bb3e6c9b34fd] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.0286003s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.03s)
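
UserAppExistsAfterStop, and the AddonExistsAfterStop step that follows, both reduce to waiting for the dashboard pods to come back healthy after the restart and then describing the metrics-scraper deployment. Outside the Go helper, the same check can be approximated with `kubectl wait` (a sketch; the label and namespace are the ones the test polls above):

    kubectl --context embed-certs-230951 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
    kubectl --context embed-certs-230951 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper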

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hthlh" [04f56d6b-9772-448b-8209-bb3e6c9b34fd] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012533789s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-230951 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-230951 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)
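
VerifyKubernetesImages lists what the container runtime has cached and flags anything that is not a stock minikube/Kubernetes image (here kindnetd and the busybox test image). The underlying command is plain crictl over SSH; the jq filter is an assumption added only to make the JSON readable:

    out/minikube-linux-arm64 ssh -p embed-certs-230951 "sudo crictl images -o json" \
      | jq -r '.images[].repoTags[]' | sort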

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.46s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-230951 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-230951 -n embed-certs-230951
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-230951 -n embed-certs-230951: exit status 2 (371.91303ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-230951 -n embed-certs-230951
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-230951 -n embed-certs-230951: exit status 2 (339.084439ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-230951 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-230951 -n embed-certs-230951
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-230951 -n embed-certs-230951
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.46s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (48.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-394535 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
E0906 20:59:52.438295  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-394535 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (48.519992987s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.52s)
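
The newest-cni profile uses the most option-heavy start command in this run: a reduced --wait set (only the apiserver, system pods, and the default service account), an explicit feature gate, CNI networking, and a kubeadm pod CIDR pushed through --extra-config. The same invocation, broken onto separate lines for readability (flags verbatim from the run):

    out/minikube-linux-arm64 start -p newest-cni-394535 \
      --memory=2200 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=crio \
      --kubernetes-version=v1.28.1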

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.5s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-394535 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-394535 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.497922564s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.50s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-394535 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-394535 --alsologtostderr -v=3: (1.333869584s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.33s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-394535 -n newest-cni-394535
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-394535 -n newest-cni-394535: exit status 7 (98.034431ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-394535 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (35.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-394535 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
E0906 21:00:28.132773  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-394535 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (34.892157755s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-394535 -n newest-cni-394535
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (17.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sstst" [792b3fee-07c4-4ec1-99af-1a5d75b0044a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sstst" [792b3fee-07c4-4ec1-99af-1a5d75b0044a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.04156201s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (17.04s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sstst" [792b3fee-07c4-4ec1-99af-1a5d75b0044a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014505563s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-055003 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-394535 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.61s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-394535 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-394535 -n newest-cni-394535
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-394535 -n newest-cni-394535: exit status 2 (356.090103ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-394535 -n newest-cni-394535
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-394535 -n newest-cni-394535: exit status 2 (376.537889ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-394535 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p newest-cni-394535 --alsologtostderr -v=1: (1.027451208s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-394535 -n newest-cni-394535
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-394535 -n newest-cni-394535
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.61s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.55s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-055003 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.55s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-055003 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-055003 --alsologtostderr -v=1: (1.311986789s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-055003 -n default-k8s-diff-port-055003
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-055003 -n default-k8s-diff-port-055003: exit status 2 (460.291762ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-055003 -n default-k8s-diff-port-055003
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-055003 -n default-k8s-diff-port-055003: exit status 2 (452.129874ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-055003 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-055003 --alsologtostderr -v=1: (1.08068593s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-055003 -n default-k8s-diff-port-055003
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-055003 -n default-k8s-diff-port-055003: (1.010130945s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-055003 -n default-k8s-diff-port-055003
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.97s)
E0906 21:07:08.688829  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/kindnet-875195/client.crt: no such file or directory
E0906 21:07:08.694134  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/kindnet-875195/client.crt: no such file or directory
E0906 21:07:08.704422  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/kindnet-875195/client.crt: no such file or directory
E0906 21:07:08.724693  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/kindnet-875195/client.crt: no such file or directory
E0906 21:07:08.764988  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/kindnet-875195/client.crt: no such file or directory
E0906 21:07:08.845459  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/kindnet-875195/client.crt: no such file or directory
E0906 21:07:09.006365  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/kindnet-875195/client.crt: no such file or directory
E0906 21:07:09.327039  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/kindnet-875195/client.crt: no such file or directory
E0906 21:07:09.968058  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/kindnet-875195/client.crt: no such file or directory
E0906 21:07:11.249138  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/kindnet-875195/client.crt: no such file or directory
E0906 21:07:13.809861  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/kindnet-875195/client.crt: no such file or directory
E0906 21:07:18.930830  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/kindnet-875195/client.crt: no such file or directory
E0906 21:07:29.171643  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/kindnet-875195/client.crt: no such file or directory
E0906 21:07:31.363992  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/auto-875195/client.crt: no such file or directory
E0906 21:07:31.369199  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/auto-875195/client.crt: no such file or directory
E0906 21:07:31.379446  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/auto-875195/client.crt: no such file or directory
E0906 21:07:31.399713  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/auto-875195/client.crt: no such file or directory
E0906 21:07:31.439992  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/auto-875195/client.crt: no such file or directory
E0906 21:07:31.520305  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/auto-875195/client.crt: no such file or directory
E0906 21:07:31.680759  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/auto-875195/client.crt: no such file or directory
E0906 21:07:32.002151  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/auto-875195/client.crt: no such file or directory
E0906 21:07:32.642378  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/auto-875195/client.crt: no such file or directory
E0906 21:07:33.922799  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/auto-875195/client.crt: no such file or directory
E0906 21:07:36.483750  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/auto-875195/client.crt: no such file or directory
E0906 21:07:37.101887  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (85.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-875195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-875195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m25.950126961s)
--- PASS: TestNetworkPlugins/group/auto/Start (85.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (56.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-875195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0906 21:01:19.400867  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/no-preload-370162/client.crt: no such file or directory
E0906 21:01:47.083366  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/no-preload-370162/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-875195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (56.291246189s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (56.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-cd2lj" [369f8477-ac09-4c4a-8294-5244f62cd14d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.036926467s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-875195 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-875195 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7l28c" [36feb1c6-da02-4dcd-92b9-c5bd2e4a1f4b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7l28c" [36feb1c6-da02-4dcd-92b9-c5bd2e4a1f4b] Running
E0906 21:02:20.170585  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.013373777s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-875195 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-875195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-875195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)
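
Each network-plugin group runs the same four-part probe against a small netcat deployment: deploy it from testdata, resolve the in-cluster DNS name, check the pod can reach a listener on its own localhost, and finally check hairpin traffic (the pod reaching itself through the name "netcat", presumably a service fronting the same deployment). The kindnet run above as a sketch; `kubectl wait` replaces the test's polling, everything else is verbatim:

    kubectl --context kindnet-875195 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context kindnet-875195 wait --for=condition=Ready pod -l app=netcat --timeout=15m
    # DNS: resolve the cluster's kubernetes service from inside the pod.
    kubectl --context kindnet-875195 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: reach the listener on the pod's own loopback, port 8080.
    kubectl --context kindnet-875195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # Hairpin: reach the same pod back through the "netcat" name.
    kubectl --context kindnet-875195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"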

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-875195 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-875195 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5n5zk" [03c866bf-bb9e-4593-b8b3-e4e4e3b42e2b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0906 21:02:37.101642  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/functional-687153/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-5n5zk" [03c866bf-bb9e-4593-b8b3-e4e4e3b42e2b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.022678851s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-875195 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-875195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-875195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (78.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-875195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-875195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m18.194210956s)
--- PASS: TestNetworkPlugins/group/calico/Start (78.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (74.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-875195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0906 21:03:38.794221  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/old-k8s-version-636595/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-875195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m14.015176792s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (74.02s)
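
Unlike kindnet and calico, which are selected by name, the custom-flannel profile passes a manifest path to --cni, so minikube applies the caller's own CNI YAML instead of a bundled plugin. Both forms, as used in this run:

    # Built-in plugin selected by name:
    out/minikube-linux-arm64 start -p calico-875195 --memory=3072 --alsologtostderr \
      --wait=true --wait-timeout=15m --cni=calico --driver=docker --container-runtime=crio
    # Caller-supplied CNI manifest selected by path:
    out/minikube-linux-arm64 start -p custom-flannel-875195 --memory=3072 --alsologtostderr \
      --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio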

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-wgvjp" [c1e334b0-135e-4e20-b7a2-1a1eeb051c3b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.08283754s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-875195 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-875195 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hw2sb" [70b0721f-073d-4058-bd59-ee4f6b278e03] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0906 21:04:16.329826  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/default-k8s-diff-port-055003/client.crt: no such file or directory
E0906 21:04:16.335373  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/default-k8s-diff-port-055003/client.crt: no such file or directory
E0906 21:04:16.345796  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/default-k8s-diff-port-055003/client.crt: no such file or directory
E0906 21:04:16.366432  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/default-k8s-diff-port-055003/client.crt: no such file or directory
E0906 21:04:16.407056  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/default-k8s-diff-port-055003/client.crt: no such file or directory
E0906 21:04:16.487322  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/default-k8s-diff-port-055003/client.crt: no such file or directory
E0906 21:04:16.647509  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/default-k8s-diff-port-055003/client.crt: no such file or directory
E0906 21:04:16.968062  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/default-k8s-diff-port-055003/client.crt: no such file or directory
E0906 21:04:17.608693  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/default-k8s-diff-port-055003/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-hw2sb" [70b0721f-073d-4058-bd59-ee4f6b278e03] Running
E0906 21:04:18.889385  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/default-k8s-diff-port-055003/client.crt: no such file or directory
E0906 21:04:21.449826  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/default-k8s-diff-port-055003/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.016287969s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-875195 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-875195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-875195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-875195 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-875195 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mw5xm" [cdcf81f0-5166-452e-8b08-1dfb25b4ae8c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0906 21:04:26.570383  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/default-k8s-diff-port-055003/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-mw5xm" [cdcf81f0-5166-452e-8b08-1dfb25b4ae8c] Running
E0906 21:04:36.810806  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/default-k8s-diff-port-055003/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.010357279s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-875195 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-875195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-875195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (84.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-875195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0906 21:04:52.438336  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/addons-342654/client.crt: no such file or directory
E0906 21:04:57.291353  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/default-k8s-diff-port-055003/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-875195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m24.496230395s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (84.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (71.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-875195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0906 21:05:11.176858  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
E0906 21:05:28.132942  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/ingress-addon-legacy-949230/client.crt: no such file or directory
E0906 21:05:38.251709  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/default-k8s-diff-port-055003/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-875195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m11.963441111s)
--- PASS: TestNetworkPlugins/group/flannel/Start (71.97s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-875195 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-875195 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5xbzd" [db77a59d-a273-4231-ab71-2f92048750a0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0906 21:06:19.400870  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/no-preload-370162/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-5xbzd" [db77a59d-a273-4231-ab71-2f92048750a0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.012003922s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-j2mw5" [e665bbf3-f580-4463-847c-6ae4fa380682] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.032146415s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-875195 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-875195 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s5gsx" [601e9556-3d3e-4d4c-bbfd-314e3f99df3a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-s5gsx" [601e9556-3d3e-4d4c-bbfd-314e3f99df3a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.013249984s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-875195 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-875195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-875195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-875195 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-875195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-875195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (47.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-875195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0906 21:07:00.172119  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/default-k8s-diff-port-055003/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-875195 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (47.283014381s)
--- PASS: TestNetworkPlugins/group/bridge/Start (47.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-875195 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-875195 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-k4g92" [c9f4d5f1-e9e7-4cb6-802b-c864d9995aef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0906 21:07:41.604618  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/auto-875195/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-k4g92" [c9f4d5f1-e9e7-4cb6-802b-c864d9995aef] Running
E0906 21:07:49.652440  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/kindnet-875195/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.01238834s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (26.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-875195 exec deployment/netcat -- nslookup kubernetes.default
E0906 21:07:51.846091  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/auto-875195/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-875195 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.220875811s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-875195 exec deployment/netcat -- nslookup kubernetes.default
E0906 21:08:12.326352  657900 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/auto-875195/client.crt: no such file or directory
net_test.go:175: (dbg) Done: kubectl --context bridge-875195 exec deployment/netcat -- nslookup kubernetes.default: (10.221323334s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (26.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-875195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-875195 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                    

Test skip (29/298)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.59s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-337090 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-337090" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-337090
--- SKIP: TestDownloadOnlyKic (0.59s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-827845" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-827845
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-875195 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-875195

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-875195

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-875195

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-875195

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-875195

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-875195

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-875195

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-875195

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-875195

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-875195

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-875195

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-875195" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-875195" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 06 Sep 2023 20:36:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-680277
contexts:
- context:
    cluster: kubernetes-upgrade-680277
    user: kubernetes-upgrade-680277
  name: kubernetes-upgrade-680277
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-680277
  user:
    client-certificate: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/kubernetes-upgrade-680277/client.crt
    client-key: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/kubernetes-upgrade-680277/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-875195

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-875195"

                                                
                                                
----------------------- debugLogs end: kubenet-875195 [took: 3.558734175s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-875195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-875195
--- SKIP: TestNetworkPlugins/group/kubenet (3.77s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-875195 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-875195

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-875195

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-875195

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-875195

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-875195

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-875195

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-875195

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-875195

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-875195

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-875195

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-875195

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-875195" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-875195

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-875195

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-875195

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-875195

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-875195" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-875195" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17116-652515/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 06 Sep 2023 20:36:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-680277
contexts:
- context:
    cluster: kubernetes-upgrade-680277
    user: kubernetes-upgrade-680277
  name: kubernetes-upgrade-680277
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-680277
  user:
    client-certificate: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/kubernetes-upgrade-680277/client.crt
    client-key: /home/jenkins/minikube-integration/17116-652515/.minikube/profiles/kubernetes-upgrade-680277/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-875195

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-875195" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-875195"

                                                
                                                
----------------------- debugLogs end: cilium-875195 [took: 4.499821827s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-875195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-875195
--- SKIP: TestNetworkPlugins/group/cilium (4.71s)

                                                
                                    