Test Report: Docker_Linux_crio_arm64 16968

3b33420a0c9ae0948b181bc91d502671e4007a23:2023-07-31:30376

Failed tests (7/298)

Order  Failed test                                          Duration (s)
25     TestAddons/parallel/Ingress                          169.05
154    TestIngressAddonLegacy/serial/ValidateIngressAddons  179.59
204    TestMultiNode/serial/PingHostFrom2Pods               4.31
225    TestRunningBinaryUpgrade                             70.76
228    TestMissingContainerUpgrade                          176.78
240    TestStoppedBinaryUpgrade/Upgrade                     91.53
251    TestPause/serial/SecondStartNoReconfiguration        53.18
TestAddons/parallel/Ingress (169.05s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-708039 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-708039 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-708039 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8674629c-3b4c-4fd5-86ac-351c683bfad4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8674629c-3b4c-4fd5-86ac-351c683bfad4] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.028686168s
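helpers_test.go polls until a pod matching the run=nginx selector reports Ready, bounded at 8m0s; here it took about 10s. A minimal sketch of that wait pattern with client-go (assumptions: default kubeconfig whose current context already points at addons-708039; illustrative, not the test's actual helper):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 2s, give up after 8 minutes (the ceiling the log reports).
		err = wait.PollImmediate(2*time.Second, 8*time.Minute, func() (bool, error) {
			pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "run=nginx"})
			if err != nil {
				return false, nil // treat API hiccups as retryable
			}
			for _, p := range pods.Items {
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
			}
			return false, nil
		})
		fmt.Println("ready:", err == nil)
	}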
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p addons-708039 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-708039 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.023332517s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
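The remote curl exited with status 28, which in curl's exit-code table means the operation timed out: nothing answered the request to 127.0.0.1:80 inside the node. A minimal sketch for reproducing the probe outside the test harness, reusing the exact command from the log (the two-minute bound is an assumption mirroring the observed ~2m10s wall time):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Bound the probe so a hung ingress fails fast instead of holding
		// the SSH session open indefinitely.
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		cmd := exec.CommandContext(ctx, "out/minikube-linux-arm64", "-p", "addons-708039",
			"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s\nerr: %v\n", out, err)
	}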
addons_test.go:262: (dbg) Run:  kubectl --context addons-708039 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:262: (dbg) Done: kubectl --context addons-708039 replace --force -f testdata/ingress-dns-example-v1.yaml: (1.038314593s)
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p addons-708039 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.046812223s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
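The nslookup timeout means no DNS server answered on 192.168.49.2:53, so the ingress-dns addon was unreachable from the host even though the node IP had just been fetched successfully with minikube ip. A minimal sketch that sends the same query straight to the node, bypassing /etc/resolv.conf (Go net.Resolver; the 15-second deadline is an assumption matching nslookup's observed wall time):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			// Route every lookup to the ingress-dns server on the node IP
			// instead of the host's configured resolvers.
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, "udp", "192.168.49.2:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "hello-john.test")
		fmt.Println(addrs, err)
	}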
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p addons-708039 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p addons-708039 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p addons-708039 addons disable ingress --alsologtostderr -v=1: (7.828392096s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-708039
helpers_test.go:235: (dbg) docker inspect addons-708039:

-- stdout --
	[
	    {
	        "Id": "a6cdce0fdc3c000279e8b01a8266f6dbcc4b179ce28f55fb9d65102b8769e38b",
	        "Created": "2023-07-31T11:48:13.528039673Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 853503,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-31T11:48:13.86493216Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/a6cdce0fdc3c000279e8b01a8266f6dbcc4b179ce28f55fb9d65102b8769e38b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a6cdce0fdc3c000279e8b01a8266f6dbcc4b179ce28f55fb9d65102b8769e38b/hostname",
	        "HostsPath": "/var/lib/docker/containers/a6cdce0fdc3c000279e8b01a8266f6dbcc4b179ce28f55fb9d65102b8769e38b/hosts",
	        "LogPath": "/var/lib/docker/containers/a6cdce0fdc3c000279e8b01a8266f6dbcc4b179ce28f55fb9d65102b8769e38b/a6cdce0fdc3c000279e8b01a8266f6dbcc4b179ce28f55fb9d65102b8769e38b-json.log",
	        "Name": "/addons-708039",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-708039:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-708039",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/980156b11bf1ef44c00a522f46bfb92d1d12490376dde266a066b9c2ff61a405-init/diff:/var/lib/docker/overlay2/ea390dfb8f8baaae26b2c19880bf5069405274e04629daebd3f048abbe32d27b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/980156b11bf1ef44c00a522f46bfb92d1d12490376dde266a066b9c2ff61a405/merged",
	                "UpperDir": "/var/lib/docker/overlay2/980156b11bf1ef44c00a522f46bfb92d1d12490376dde266a066b9c2ff61a405/diff",
	                "WorkDir": "/var/lib/docker/overlay2/980156b11bf1ef44c00a522f46bfb92d1d12490376dde266a066b9c2ff61a405/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-708039",
	                "Source": "/var/lib/docker/volumes/addons-708039/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-708039",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-708039",
	                "name.minikube.sigs.k8s.io": "addons-708039",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "717e364423abe8bc9376a1c90d737458182c7f20014ac2a2e88788f51c35792c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35841"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35840"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35837"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35839"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35838"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/717e364423ab",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-708039": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a6cdce0fdc3c",
	                        "addons-708039"
	                    ],
	                    "NetworkID": "3cbc3c122435e5ec1c71b6e6fd46d325901acc0980ff56337c2e78edb0338931",
	                    "EndpointID": "c56e9e880708cc5f9b6f53906c140ee2c744f7bf5ef6a26f57f66d2da8b55fa7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
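The inspect output shows the node container still Running with IPAddress 192.168.49.2 on the addons-708039 network, so the container itself stayed healthy through the failure. A minimal sketch of extracting that address programmatically, decoding only the fields visible in the JSON above (illustrative, not minikube's own inspection code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "inspect", "addons-708039").Output()
		if err != nil {
			panic(err)
		}
		// docker inspect prints a JSON array; model only the fields we need.
		var containers []struct {
			NetworkSettings struct {
				Networks map[string]struct {
					IPAddress string
				}
			}
		}
		if err := json.Unmarshal(out, &containers); err != nil {
			panic(err)
		}
		fmt.Println(containers[0].NetworkSettings.Networks["addons-708039"].IPAddress)
	}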
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-708039 -n addons-708039
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-708039 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-708039 logs -n 25: (1.635555768s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-593678   | jenkins | v1.31.1 | 31 Jul 23 11:47 UTC |                     |
	|         | -p download-only-593678        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-593678   | jenkins | v1.31.1 | 31 Jul 23 11:47 UTC |                     |
	|         | -p download-only-593678        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.31.1 | 31 Jul 23 11:47 UTC | 31 Jul 23 11:47 UTC |
	| delete  | -p download-only-593678        | download-only-593678   | jenkins | v1.31.1 | 31 Jul 23 11:47 UTC | 31 Jul 23 11:47 UTC |
	| delete  | -p download-only-593678        | download-only-593678   | jenkins | v1.31.1 | 31 Jul 23 11:47 UTC | 31 Jul 23 11:47 UTC |
	| start   | --download-only -p             | download-docker-307334 | jenkins | v1.31.1 | 31 Jul 23 11:47 UTC |                     |
	|         | download-docker-307334         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p download-docker-307334      | download-docker-307334 | jenkins | v1.31.1 | 31 Jul 23 11:47 UTC | 31 Jul 23 11:47 UTC |
	| start   | --download-only -p             | binary-mirror-265698   | jenkins | v1.31.1 | 31 Jul 23 11:47 UTC |                     |
	|         | binary-mirror-265698           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:40293         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-265698        | binary-mirror-265698   | jenkins | v1.31.1 | 31 Jul 23 11:47 UTC | 31 Jul 23 11:47 UTC |
	| start   | -p addons-708039               | addons-708039          | jenkins | v1.31.1 | 31 Jul 23 11:47 UTC | 31 Jul 23 11:50 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-708039          | jenkins | v1.31.1 | 31 Jul 23 11:50 UTC | 31 Jul 23 11:50 UTC |
	|         | addons-708039                  |                        |         |         |                     |                     |
	| addons  | addons-708039 addons           | addons-708039          | jenkins | v1.31.1 | 31 Jul 23 11:50 UTC | 31 Jul 23 11:50 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-708039 ip               | addons-708039          | jenkins | v1.31.1 | 31 Jul 23 11:50 UTC | 31 Jul 23 11:50 UTC |
	| addons  | addons-708039 addons disable   | addons-708039          | jenkins | v1.31.1 | 31 Jul 23 11:50 UTC | 31 Jul 23 11:50 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-708039          | jenkins | v1.31.1 | 31 Jul 23 11:50 UTC | 31 Jul 23 11:50 UTC |
	|         | addons-708039                  |                        |         |         |                     |                     |
	| addons  | enable headlamp                | addons-708039          | jenkins | v1.31.1 | 31 Jul 23 11:50 UTC | 31 Jul 23 11:50 UTC |
	|         | -p addons-708039               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ssh     | addons-708039 ssh curl -s      | addons-708039          | jenkins | v1.31.1 | 31 Jul 23 11:50 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| addons  | addons-708039 addons           | addons-708039          | jenkins | v1.31.1 | 31 Jul 23 11:51 UTC | 31 Jul 23 11:52 UTC |
	|         | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-708039 addons           | addons-708039          | jenkins | v1.31.1 | 31 Jul 23 11:52 UTC | 31 Jul 23 11:52 UTC |
	|         | disable volumesnapshots        |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-708039 ip               | addons-708039          | jenkins | v1.31.1 | 31 Jul 23 11:53 UTC | 31 Jul 23 11:53 UTC |
	| addons  | addons-708039 addons disable   | addons-708039          | jenkins | v1.31.1 | 31 Jul 23 11:53 UTC | 31 Jul 23 11:53 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-708039 addons disable   | addons-708039          | jenkins | v1.31.1 | 31 Jul 23 11:53 UTC | 31 Jul 23 11:53 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/31 11:47:50
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 11:47:50.317950  853047 out.go:296] Setting OutFile to fd 1 ...
	I0731 11:47:50.318077  853047 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:47:50.318085  853047 out.go:309] Setting ErrFile to fd 2...
	I0731 11:47:50.318090  853047 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:47:50.318379  853047 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-847174/.minikube/bin
	I0731 11:47:50.318813  853047 out.go:303] Setting JSON to false
	I0731 11:47:50.319796  853047 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":70218,"bootTime":1690733853,"procs":311,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0731 11:47:50.319865  853047 start.go:138] virtualization:  
	I0731 11:47:50.322134  853047 out.go:177] * [addons-708039] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0731 11:47:50.324154  853047 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 11:47:50.325802  853047 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:47:50.324385  853047 notify.go:220] Checking for updates...
	I0731 11:47:50.329181  853047 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig
	I0731 11:47:50.330714  853047 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube
	I0731 11:47:50.332269  853047 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 11:47:50.333711  853047 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 11:47:50.335511  853047 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 11:47:50.362772  853047 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 11:47:50.362877  853047 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 11:47:50.455656  853047 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-07-31 11:47:50.44512457 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 11:47:50.455764  853047 docker.go:294] overlay module found
	I0731 11:47:50.457587  853047 out.go:177] * Using the docker driver based on user configuration
	I0731 11:47:50.459931  853047 start.go:298] selected driver: docker
	I0731 11:47:50.459947  853047 start.go:898] validating driver "docker" against <nil>
	I0731 11:47:50.459961  853047 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 11:47:50.460616  853047 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 11:47:50.529499  853047 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-07-31 11:47:50.518141205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 11:47:50.529678  853047 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 11:47:50.529902  853047 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 11:47:50.531991  853047 out.go:177] * Using Docker driver with root privileges
	I0731 11:47:50.533762  853047 cni.go:84] Creating CNI manager for ""
	I0731 11:47:50.533784  853047 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 11:47:50.533794  853047 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 11:47:50.533805  853047 start_flags.go:319] config:
	{Name:addons-708039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-708039 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 11:47:50.537030  853047 out.go:177] * Starting control plane node addons-708039 in cluster addons-708039
	I0731 11:47:50.538974  853047 cache.go:122] Beginning downloading kic base image for docker with crio
	I0731 11:47:50.540718  853047 out.go:177] * Pulling base image ...
	I0731 11:47:50.543029  853047 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 11:47:50.543093  853047 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4
	I0731 11:47:50.543101  853047 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0731 11:47:50.543107  853047 cache.go:57] Caching tarball of preloaded images
	I0731 11:47:50.543225  853047 preload.go:174] Found /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0731 11:47:50.543234  853047 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0731 11:47:50.543575  853047 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/config.json ...
	I0731 11:47:50.543603  853047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/config.json: {Name:mk184637ef0d1d8528bc843d6c9398ce2e9effd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:47:50.560733  853047 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0731 11:47:50.560838  853047 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0731 11:47:50.560860  853047 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0731 11:47:50.560868  853047 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0731 11:47:50.560876  853047 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0731 11:47:50.560885  853047 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from local cache
	I0731 11:48:06.480198  853047 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from cached tarball
	I0731 11:48:06.480235  853047 cache.go:195] Successfully downloaded all kic artifacts
	I0731 11:48:06.480286  853047 start.go:365] acquiring machines lock for addons-708039: {Name:mk0eb8987dcee277bfcceebb2b9504f8a5ad3b92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:48:06.480772  853047 start.go:369] acquired machines lock for "addons-708039" in 461.101µs
	I0731 11:48:06.480810  853047 start.go:93] Provisioning new machine with config: &{Name:addons-708039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-708039 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 11:48:06.480907  853047 start.go:125] createHost starting for "" (driver="docker")
	I0731 11:48:06.482847  853047 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0731 11:48:06.483095  853047 start.go:159] libmachine.API.Create for "addons-708039" (driver="docker")
	I0731 11:48:06.483123  853047 client.go:168] LocalClient.Create starting
	I0731 11:48:06.483254  853047 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem
	I0731 11:48:06.697468  853047 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem
	I0731 11:48:07.322941  853047 cli_runner.go:164] Run: docker network inspect addons-708039 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 11:48:07.340317  853047 cli_runner.go:211] docker network inspect addons-708039 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 11:48:07.340408  853047 network_create.go:281] running [docker network inspect addons-708039] to gather additional debugging logs...
	I0731 11:48:07.340431  853047 cli_runner.go:164] Run: docker network inspect addons-708039
	W0731 11:48:07.358680  853047 cli_runner.go:211] docker network inspect addons-708039 returned with exit code 1
	I0731 11:48:07.358713  853047 network_create.go:284] error running [docker network inspect addons-708039]: docker network inspect addons-708039: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-708039 not found
	I0731 11:48:07.358729  853047 network_create.go:286] output of [docker network inspect addons-708039]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-708039 not found
	
	** /stderr **
	I0731 11:48:07.358791  853047 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 11:48:07.377033  853047 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000c9cc60}
	I0731 11:48:07.377082  853047 network_create.go:123] attempt to create docker network addons-708039 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0731 11:48:07.377142  853047 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-708039 addons-708039
	I0731 11:48:07.449094  853047 network_create.go:107] docker network addons-708039 192.168.49.0/24 created
	I0731 11:48:07.449125  853047 kic.go:117] calculated static IP "192.168.49.2" for the "addons-708039" container
	I0731 11:48:07.449201  853047 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 11:48:07.465679  853047 cli_runner.go:164] Run: docker volume create addons-708039 --label name.minikube.sigs.k8s.io=addons-708039 --label created_by.minikube.sigs.k8s.io=true
	I0731 11:48:07.486200  853047 oci.go:103] Successfully created a docker volume addons-708039
	I0731 11:48:07.486295  853047 cli_runner.go:164] Run: docker run --rm --name addons-708039-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-708039 --entrypoint /usr/bin/test -v addons-708039:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0731 11:48:09.325989  853047 cli_runner.go:217] Completed: docker run --rm --name addons-708039-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-708039 --entrypoint /usr/bin/test -v addons-708039:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (1.839654569s)
	I0731 11:48:09.326021  853047 oci.go:107] Successfully prepared a docker volume addons-708039
	I0731 11:48:09.326061  853047 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 11:48:09.326081  853047 kic.go:190] Starting extracting preloaded images to volume ...
	I0731 11:48:09.326176  853047 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-708039:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0731 11:48:13.442439  853047 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-708039:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.116217994s)
	I0731 11:48:13.442471  853047 kic.go:199] duration metric: took 4.116385 seconds to extract preloaded images to volume
	W0731 11:48:13.442636  853047 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0731 11:48:13.442795  853047 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0731 11:48:13.510079  853047 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-708039 --name addons-708039 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-708039 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-708039 --network addons-708039 --ip 192.168.49.2 --volume addons-708039:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0731 11:48:13.873819  853047 cli_runner.go:164] Run: docker container inspect addons-708039 --format={{.State.Running}}
	I0731 11:48:13.902901  853047 cli_runner.go:164] Run: docker container inspect addons-708039 --format={{.State.Status}}
	I0731 11:48:13.936294  853047 cli_runner.go:164] Run: docker exec addons-708039 stat /var/lib/dpkg/alternatives/iptables
	I0731 11:48:14.014986  853047 oci.go:144] the created container "addons-708039" has a running status.
	I0731 11:48:14.015020  853047 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/addons-708039/id_rsa...
	I0731 11:48:14.404512  853047 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16968-847174/.minikube/machines/addons-708039/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0731 11:48:14.428444  853047 cli_runner.go:164] Run: docker container inspect addons-708039 --format={{.State.Status}}
	I0731 11:48:14.450237  853047 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0731 11:48:14.450263  853047 kic_runner.go:114] Args: [docker exec --privileged addons-708039 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0731 11:48:14.552422  853047 cli_runner.go:164] Run: docker container inspect addons-708039 --format={{.State.Status}}
	I0731 11:48:14.578713  853047 machine.go:88] provisioning docker machine ...
	I0731 11:48:14.578747  853047 ubuntu.go:169] provisioning hostname "addons-708039"
	I0731 11:48:14.578817  853047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-708039
	I0731 11:48:14.614772  853047 main.go:141] libmachine: Using SSH client type: native
	I0731 11:48:14.615877  853047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 35841 <nil> <nil>}
	I0731 11:48:14.615898  853047 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-708039 && echo "addons-708039" | sudo tee /etc/hostname
	I0731 11:48:14.616988  853047 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38916->127.0.0.1:35841: read: connection reset by peer
	I0731 11:48:17.760994  853047 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-708039
	
	I0731 11:48:17.761082  853047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-708039
	I0731 11:48:17.781513  853047 main.go:141] libmachine: Using SSH client type: native
	I0731 11:48:17.781947  853047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 35841 <nil> <nil>}
	I0731 11:48:17.781972  853047 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-708039' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-708039/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-708039' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 11:48:17.913475  853047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 11:48:17.913498  853047 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16968-847174/.minikube CaCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16968-847174/.minikube}
	I0731 11:48:17.913516  853047 ubuntu.go:177] setting up certificates
	I0731 11:48:17.913524  853047 provision.go:83] configureAuth start
	I0731 11:48:17.913584  853047 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-708039
	I0731 11:48:17.931137  853047 provision.go:138] copyHostCerts
	I0731 11:48:17.931222  853047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem (1082 bytes)
	I0731 11:48:17.931351  853047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem (1123 bytes)
	I0731 11:48:17.931417  853047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem (1679 bytes)
	I0731 11:48:17.931475  853047 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca-key.pem org=jenkins.addons-708039 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-708039]
	I0731 11:48:18.844075  853047 provision.go:172] copyRemoteCerts
	I0731 11:48:18.844181  853047 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 11:48:18.844225  853047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-708039
	I0731 11:48:18.861785  853047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35841 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/addons-708039/id_rsa Username:docker}
	I0731 11:48:18.959308  853047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 11:48:18.987861  853047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0731 11:48:19.018134  853047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 11:48:19.046841  853047 provision.go:86] duration metric: configureAuth took 1.133303119s
	I0731 11:48:19.046865  853047 ubuntu.go:193] setting minikube options for container-runtime
	I0731 11:48:19.047056  853047 config.go:182] Loaded profile config "addons-708039": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 11:48:19.047162  853047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-708039
	I0731 11:48:19.065517  853047 main.go:141] libmachine: Using SSH client type: native
	I0731 11:48:19.065998  853047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 35841 <nil> <nil>}
	I0731 11:48:19.066023  853047 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 11:48:19.312511  853047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 11:48:19.312534  853047 machine.go:91] provisioned docker machine in 4.733797679s
	I0731 11:48:19.312544  853047 client.go:171] LocalClient.Create took 12.829413646s
	I0731 11:48:19.312560  853047 start.go:167] duration metric: libmachine.API.Create for "addons-708039" took 12.82946483s
	I0731 11:48:19.312568  853047 start.go:300] post-start starting for "addons-708039" (driver="docker")
	I0731 11:48:19.312581  853047 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 11:48:19.312659  853047 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 11:48:19.312707  853047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-708039
	I0731 11:48:19.333353  853047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35841 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/addons-708039/id_rsa Username:docker}
	I0731 11:48:19.427073  853047 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 11:48:19.431276  853047 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 11:48:19.431314  853047 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 11:48:19.431325  853047 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 11:48:19.431333  853047 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0731 11:48:19.431343  853047 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-847174/.minikube/addons for local assets ...
	I0731 11:48:19.431411  853047 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-847174/.minikube/files for local assets ...
	I0731 11:48:19.431438  853047 start.go:303] post-start completed in 118.860417ms
	I0731 11:48:19.431751  853047 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-708039
	I0731 11:48:19.448958  853047 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/config.json ...
	I0731 11:48:19.449255  853047 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 11:48:19.449301  853047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-708039
	I0731 11:48:19.466747  853047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35841 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/addons-708039/id_rsa Username:docker}
	I0731 11:48:19.558259  853047 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 11:48:19.564086  853047 start.go:128] duration metric: createHost completed in 13.083164394s
	I0731 11:48:19.564124  853047 start.go:83] releasing machines lock for "addons-708039", held for 13.083335946s
	I0731 11:48:19.564199  853047 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-708039
	I0731 11:48:19.581546  853047 ssh_runner.go:195] Run: cat /version.json
	I0731 11:48:19.581585  853047 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 11:48:19.581597  853047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-708039
	I0731 11:48:19.581651  853047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-708039
	I0731 11:48:19.601105  853047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35841 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/addons-708039/id_rsa Username:docker}
	I0731 11:48:19.613574  853047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35841 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/addons-708039/id_rsa Username:docker}
	I0731 11:48:19.688831  853047 ssh_runner.go:195] Run: systemctl --version
	I0731 11:48:19.830954  853047 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 11:48:19.980442  853047 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 11:48:19.986280  853047 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 11:48:20.025274  853047 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0731 11:48:20.025357  853047 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 11:48:20.065182  853047 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
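
The `find ... -exec mv {} {}.mk_disabled` passes above (first for *loopback.conf*, then for bridge/podman configs) disable any pre-existing CNI configuration in /etc/cni/net.d so that only the CNI minikube installs (kindnet, per the cni.go:143 line later) is active. A sketch of the bridge/podman rename-to-disable pass in Go, with glob patterns mirroring the log; it would need to run as root:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
    		matches, err := filepath.Glob(pattern)
    		if err != nil {
    			panic(err) // only possible for a malformed pattern
    		}
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue // already disabled on a previous start
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				fmt.Fprintln(os.Stderr, "disable", m, ":", err)
    			}
    		}
    	}
    }
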
	I0731 11:48:20.065205  853047 start.go:466] detecting cgroup driver to use...
	I0731 11:48:20.065238  853047 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0731 11:48:20.065293  853047 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 11:48:20.085240  853047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 11:48:20.101340  853047 docker.go:196] disabling cri-docker service (if available) ...
	I0731 11:48:20.101410  853047 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 11:48:20.119514  853047 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 11:48:20.142806  853047 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 11:48:20.246409  853047 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 11:48:20.350650  853047 docker.go:212] disabling docker service ...
	I0731 11:48:20.350720  853047 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 11:48:20.373385  853047 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 11:48:20.387435  853047 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 11:48:20.491482  853047 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 11:48:20.595801  853047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 11:48:20.609738  853047 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 11:48:20.630991  853047 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 11:48:20.631083  853047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 11:48:20.643402  853047 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 11:48:20.643501  853047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 11:48:20.655769  853047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 11:48:20.668255  853047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 11:48:20.680437  853047 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 11:48:20.691970  853047 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 11:48:20.702844  853047 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 11:48:20.713856  853047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 11:48:20.804001  853047 ssh_runner.go:195] Run: sudo systemctl restart crio
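
The four `sed` edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the pinned pause image and the cgroupfs cgroup manager with conmon placed in the pod cgroup; the daemon-reload and `systemctl restart crio` then make the changes take effect. The same edit as a small Go sketch (path and keys from the log; the delete-then-reinsert of conmon_cgroup is folded into one replacement):

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(conf)
    	if err != nil {
    		panic(err)
    	}
    	s := string(data)
    	// Pin the pause image, mirroring the first sed invocation.
    	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.9"`)
    	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
    	s = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`).ReplaceAllString(s, "")
    	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(s, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
    	if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
    		panic(err)
    	}
    }
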
	I0731 11:48:20.924994  853047 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 11:48:20.925154  853047 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 11:48:20.930621  853047 start.go:534] Will wait 60s for crictl version
	I0731 11:48:20.930732  853047 ssh_runner.go:195] Run: which crictl
	I0731 11:48:20.935315  853047 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 11:48:20.984871  853047 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0731 11:48:20.984969  853047 ssh_runner.go:195] Run: crio --version
	I0731 11:48:21.041097  853047 ssh_runner.go:195] Run: crio --version
	I0731 11:48:21.090520  853047 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0731 11:48:21.092359  853047 cli_runner.go:164] Run: docker network inspect addons-708039 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 11:48:21.113756  853047 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0731 11:48:21.118891  853047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 11:48:21.133213  853047 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 11:48:21.133287  853047 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 11:48:21.195622  853047 crio.go:496] all images are preloaded for cri-o runtime.
	I0731 11:48:21.195644  853047 crio.go:415] Images already preloaded, skipping extraction
	I0731 11:48:21.195701  853047 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 11:48:21.241364  853047 crio.go:496] all images are preloaded for cri-o runtime.
	I0731 11:48:21.241385  853047 cache_images.go:84] Images are preloaded, skipping loading
	I0731 11:48:21.241457  853047 ssh_runner.go:195] Run: crio config
	I0731 11:48:21.301115  853047 cni.go:84] Creating CNI manager for ""
	I0731 11:48:21.301138  853047 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 11:48:21.301148  853047 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0731 11:48:21.301165  853047 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-708039 NodeName:addons-708039 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 11:48:21.301309  853047 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-708039"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
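
The three YAML documents above (InitConfiguration, ClusterConfiguration, and the kubelet/kube-proxy configs) are rendered from the option struct logged at kubeadm.go:176. As a toy illustration of that templating step, here is a text/template rendering of just the InitConfiguration fragment; the field names here are chosen for the sketch and are not minikube's actual template:

    package main

    import (
    	"os"
    	"text/template"
    )

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
      taints: []
    `

    func main() {
    	// Values copied from the kubeadm options line in the log above.
    	opts := struct {
    		AdvertiseAddress, CRISocket, NodeName, NodeIP string
    		APIServerPort                                 int
    	}{"192.168.49.2", "/var/run/crio/crio.sock", "addons-708039", "192.168.49.2", 8443}
    	t := template.Must(template.New("init").Parse(initCfg))
    	if err := t.Execute(os.Stdout, opts); err != nil {
    		panic(err)
    	}
    }
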
	I0731 11:48:21.301377  853047 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-708039 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-708039 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0731 11:48:21.301448  853047 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0731 11:48:21.312515  853047 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 11:48:21.312591  853047 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 11:48:21.323119  853047 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0731 11:48:21.343827  853047 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 11:48:21.364497  853047 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0731 11:48:21.384643  853047 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0731 11:48:21.389173  853047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 11:48:21.402692  853047 certs.go:56] Setting up /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039 for IP: 192.168.49.2
	I0731 11:48:21.402725  853047 certs.go:190] acquiring lock for shared ca certs: {Name:mk762e840a818dea6b5e9edfaa8822eb28411d63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:48:21.402867  853047 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/16968-847174/.minikube/ca.key
	I0731 11:48:21.983064  853047 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt ...
	I0731 11:48:21.983095  853047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt: {Name:mkdfb6cf992296f246d1c0fb22e13d74157a9846 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:48:21.983683  853047 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16968-847174/.minikube/ca.key ...
	I0731 11:48:21.983700  853047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/ca.key: {Name:mk03e33294f42bc73e7095723ad6736f06bc9ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:48:21.983801  853047 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/16968-847174/.minikube/proxy-client-ca.key
	I0731 11:48:22.410184  853047 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16968-847174/.minikube/proxy-client-ca.crt ...
	I0731 11:48:22.410215  853047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/proxy-client-ca.crt: {Name:mkdabcdd76497f5a0dd634e8e8c42b926e8dc44d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:48:22.410408  853047 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16968-847174/.minikube/proxy-client-ca.key ...
	I0731 11:48:22.410421  853047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/proxy-client-ca.key: {Name:mk4ae129d8516546f29b302201a2ff4080838439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:48:22.410540  853047 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.key
	I0731 11:48:22.410582  853047 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt with IP's: []
	I0731 11:48:22.530512  853047 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt ...
	I0731 11:48:22.530539  853047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: {Name:mk75e2ad9a6af581848a4624c2e97ac004ce6c5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:48:22.531235  853047 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.key ...
	I0731 11:48:22.531252  853047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.key: {Name:mk8f14135e3b2f794078b4c7bb19c9af38f2c4b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:48:22.531672  853047 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/apiserver.key.dd3b5fb2
	I0731 11:48:22.531695  853047 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0731 11:48:22.850285  853047 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/apiserver.crt.dd3b5fb2 ...
	I0731 11:48:22.850315  853047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/apiserver.crt.dd3b5fb2: {Name:mkccd6b6fc677e7c7f896337c96b93635c86e504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:48:22.850874  853047 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/apiserver.key.dd3b5fb2 ...
	I0731 11:48:22.850889  853047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/apiserver.key.dd3b5fb2: {Name:mke15015e09210c738fe90b93d0b2a5703ed8038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:48:22.851392  853047 certs.go:337] copying /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/apiserver.crt
	I0731 11:48:22.851466  853047 certs.go:341] copying /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/apiserver.key
	I0731 11:48:22.851515  853047 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/proxy-client.key
	I0731 11:48:22.851535  853047 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/proxy-client.crt with IP's: []
	I0731 11:48:23.436252  853047 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/proxy-client.crt ...
	I0731 11:48:23.436287  853047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/proxy-client.crt: {Name:mk385bc2d4a5c3e2d80578cd6b117c256fce7852 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:48:23.436495  853047 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/proxy-client.key ...
	I0731 11:48:23.436509  853047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/proxy-client.key: {Name:mk837c9c08033ef687440c1d7731582888088148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:48:23.437128  853047 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 11:48:23.437207  853047 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem (1082 bytes)
	I0731 11:48:23.437246  853047 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem (1123 bytes)
	I0731 11:48:23.437276  853047 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem (1679 bytes)
	I0731 11:48:23.437883  853047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0731 11:48:23.468167  853047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 11:48:23.498895  853047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 11:48:23.528728  853047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 11:48:23.558321  853047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 11:48:23.587646  853047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 11:48:23.616782  853047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 11:48:23.646577  853047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 11:48:23.675534  853047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 11:48:23.704745  853047 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 11:48:23.726669  853047 ssh_runner.go:195] Run: openssl version
	I0731 11:48:23.733821  853047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 11:48:23.745984  853047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 11:48:23.751019  853047 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 31 11:48 /usr/share/ca-certificates/minikubeCA.pem
	I0731 11:48:23.751096  853047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 11:48:23.759798  853047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
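
The `openssl x509 -hash` / `ln -fs` pair above wires minikubeCA.pem into the system trust store: OpenSSL-style stores look certificates up by subject-name hash, so the link is named `<hash>.0` (b5213941.0 in this run). A sketch of the same wiring that shells out to openssl for the hash; it assumes the openssl CLI is available and enough privilege to write /etc/ssl/certs:

    package main

    import (
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	const cert = "/usr/share/ca-certificates/minikubeCA.pem"
    	// openssl prints the 8-hex-digit subject hash, e.g. "b5213941".
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		panic(err)
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	_ = os.Remove(link) // replace any stale link, mirroring ln -fs
    	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
    		panic(err)
    	}
    }
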
	I0731 11:48:23.771513  853047 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0731 11:48:23.775909  853047 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0731 11:48:23.775956  853047 kubeadm.go:404] StartCluster: {Name:addons-708039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-708039 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 11:48:23.776036  853047 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 11:48:23.776089  853047 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 11:48:23.818219  853047 cri.go:89] found id: ""
	I0731 11:48:23.818332  853047 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 11:48:23.829460  853047 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 11:48:23.840530  853047 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0731 11:48:23.840599  853047 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 11:48:23.851093  853047 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 11:48:23.851144  853047 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0731 11:48:23.907095  853047 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0731 11:48:23.907178  853047 kubeadm.go:322] [preflight] Running pre-flight checks
	I0731 11:48:23.954314  853047 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0731 11:48:23.954385  853047 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1040-aws
	I0731 11:48:23.954422  853047 kubeadm.go:322] OS: Linux
	I0731 11:48:23.954470  853047 kubeadm.go:322] CGROUPS_CPU: enabled
	I0731 11:48:23.954520  853047 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0731 11:48:23.954567  853047 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0731 11:48:23.954616  853047 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0731 11:48:23.954665  853047 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0731 11:48:23.954716  853047 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0731 11:48:23.954760  853047 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0731 11:48:23.954809  853047 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0731 11:48:23.954874  853047 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0731 11:48:24.040276  853047 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 11:48:24.040490  853047 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 11:48:24.040633  853047 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 11:48:24.295079  853047 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 11:48:24.298215  853047 out.go:204]   - Generating certificates and keys ...
	I0731 11:48:24.298462  853047 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0731 11:48:24.298531  853047 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0731 11:48:24.791473  853047 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 11:48:25.192499  853047 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0731 11:48:25.731723  853047 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0731 11:48:26.198101  853047 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0731 11:48:26.438572  853047 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0731 11:48:26.439038  853047 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-708039 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0731 11:48:27.329301  853047 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0731 11:48:27.329698  853047 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-708039 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0731 11:48:27.658995  853047 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 11:48:28.135913  853047 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 11:48:28.395759  853047 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0731 11:48:28.396176  853047 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 11:48:28.704412  853047 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 11:48:29.456248  853047 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 11:48:30.979873  853047 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 11:48:31.620790  853047 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 11:48:31.631991  853047 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 11:48:31.633399  853047 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 11:48:31.633477  853047 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0731 11:48:31.746684  853047 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 11:48:31.748769  853047 out.go:204]   - Booting up control plane ...
	I0731 11:48:31.748870  853047 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 11:48:31.754949  853047 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 11:48:31.756429  853047 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 11:48:31.757600  853047 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 11:48:31.760448  853047 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 11:48:39.763181  853047 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002208 seconds
	I0731 11:48:39.763295  853047 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 11:48:39.777287  853047 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 11:48:40.314527  853047 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 11:48:40.314710  853047 kubeadm.go:322] [mark-control-plane] Marking the node addons-708039 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 11:48:40.827066  853047 kubeadm.go:322] [bootstrap-token] Using token: hsrwad.m1ocpwkxtdloy3o4
	I0731 11:48:40.828826  853047 out.go:204]   - Configuring RBAC rules ...
	I0731 11:48:40.828957  853047 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 11:48:40.835985  853047 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 11:48:40.844748  853047 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 11:48:40.848734  853047 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 11:48:40.854367  853047 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 11:48:40.858746  853047 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 11:48:40.872828  853047 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 11:48:41.116226  853047 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0731 11:48:41.247548  853047 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0731 11:48:41.247565  853047 kubeadm.go:322] 
	I0731 11:48:41.247622  853047 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0731 11:48:41.247627  853047 kubeadm.go:322] 
	I0731 11:48:41.247699  853047 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0731 11:48:41.247703  853047 kubeadm.go:322] 
	I0731 11:48:41.247727  853047 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0731 11:48:41.247790  853047 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 11:48:41.247859  853047 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 11:48:41.247864  853047 kubeadm.go:322] 
	I0731 11:48:41.247915  853047 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0731 11:48:41.247919  853047 kubeadm.go:322] 
	I0731 11:48:41.247964  853047 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 11:48:41.247968  853047 kubeadm.go:322] 
	I0731 11:48:41.248016  853047 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0731 11:48:41.248086  853047 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 11:48:41.248169  853047 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 11:48:41.248176  853047 kubeadm.go:322] 
	I0731 11:48:41.248281  853047 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 11:48:41.248362  853047 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0731 11:48:41.248368  853047 kubeadm.go:322] 
	I0731 11:48:41.248452  853047 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token hsrwad.m1ocpwkxtdloy3o4 \
	I0731 11:48:41.248592  853047 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59797f47caa702c46c8e55349da2b7fcf9d45fa97f7025328f291444513c4181 \
	I0731 11:48:41.248624  853047 kubeadm.go:322] 	--control-plane 
	I0731 11:48:41.248635  853047 kubeadm.go:322] 
	I0731 11:48:41.248730  853047 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0731 11:48:41.248736  853047 kubeadm.go:322] 
	I0731 11:48:41.248825  853047 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token hsrwad.m1ocpwkxtdloy3o4 \
	I0731 11:48:41.248933  853047 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59797f47caa702c46c8e55349da2b7fcf9d45fa97f7025328f291444513c4181 
	I0731 11:48:41.255972  853047 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1040-aws\n", err: exit status 1
	I0731 11:48:41.256087  853047 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 11:48:41.256130  853047 cni.go:84] Creating CNI manager for ""
	I0731 11:48:41.256143  853047 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 11:48:41.258175  853047 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 11:48:41.259810  853047 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 11:48:41.264836  853047 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0731 11:48:41.264857  853047 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0731 11:48:41.303761  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0731 11:48:42.284958  853047 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 11:48:42.285092  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:42.285163  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.1 minikube.k8s.io/commit=0b7b0f4114385a1c2b88c73e894c2289f44aee35 minikube.k8s.io/name=addons-708039 minikube.k8s.io/updated_at=2023_07_31T11_48_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:42.471621  853047 ops.go:34] apiserver oom_adj: -16
	I0731 11:48:42.471716  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:42.581373  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:43.179176  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:43.679942  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:44.179696  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:44.679400  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:45.179581  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:45.679340  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:46.179933  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:46.678964  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:47.179373  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:47.679579  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:48.179393  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:48.678963  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:49.179502  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:49.679523  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:50.178986  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:50.678977  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:51.179181  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:51.678958  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:52.179872  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:52.678963  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:53.178890  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:53.678911  853047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:48:53.826113  853047 kubeadm.go:1081] duration metric: took 11.541093919s to wait for elevateKubeSystemPrivileges.
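
The run of `kubectl get sa default` commands above is a roughly 500ms poll that ends once the default ServiceAccount exists, i.e. once the API server and controllers are actually serving requests (the elevateKubeSystemPrivileges wait noted at kubeadm.go:1081, which took 11.5s here). A generic sketch of such a wait loop, with the kubeconfig path taken from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA retries "kubectl get sa default" until it succeeds
    // or the timeout elapses.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
    			"get", "sa", "default").Run()
    		if err == nil {
    			return nil // service account exists; the control plane is serving
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
    	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
    		panic(err)
    	}
    }
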
	I0731 11:48:53.826136  853047 kubeadm.go:406] StartCluster complete in 30.050184545s
	I0731 11:48:53.826152  853047 settings.go:142] acquiring lock: {Name:mk829b6893936aa5483dce9aaeef4d670cd88116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:48:53.826680  853047 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16968-847174/kubeconfig
	I0731 11:48:53.827153  853047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/kubeconfig: {Name:mk6696558a0c97b92d2f11c98afd477ee2b6ad51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:48:53.827703  853047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 11:48:53.828043  853047 config.go:182] Loaded profile config "addons-708039": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 11:48:53.828187  853047 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0731 11:48:53.828271  853047 addons.go:69] Setting volumesnapshots=true in profile "addons-708039"
	I0731 11:48:53.828284  853047 addons.go:231] Setting addon volumesnapshots=true in "addons-708039"
	I0731 11:48:53.828336  853047 host.go:66] Checking if "addons-708039" exists ...
	I0731 11:48:53.828777  853047 cli_runner.go:164] Run: docker container inspect addons-708039 --format={{.State.Status}}
	I0731 11:48:53.828951  853047 addons.go:69] Setting ingress=true in profile "addons-708039"
	I0731 11:48:53.828968  853047 addons.go:231] Setting addon ingress=true in "addons-708039"
	I0731 11:48:53.829014  853047 host.go:66] Checking if "addons-708039" exists ...
	I0731 11:48:53.829384  853047 cli_runner.go:164] Run: docker container inspect addons-708039 --format={{.State.Status}}
	I0731 11:48:53.829443  853047 addons.go:69] Setting cloud-spanner=true in profile "addons-708039"
	I0731 11:48:53.829452  853047 addons.go:231] Setting addon cloud-spanner=true in "addons-708039"
	I0731 11:48:53.829479  853047 host.go:66] Checking if "addons-708039" exists ...
	I0731 11:48:53.829803  853047 cli_runner.go:164] Run: docker container inspect addons-708039 --format={{.State.Status}}
	I0731 11:48:53.829860  853047 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-708039"
	I0731 11:48:53.829883  853047 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-708039"
	I0731 11:48:53.829906  853047 host.go:66] Checking if "addons-708039" exists ...
	I0731 11:48:53.830229  853047 cli_runner.go:164] Run: docker container inspect addons-708039 --format={{.State.Status}}
	I0731 11:48:53.830280  853047 addons.go:69] Setting default-storageclass=true in profile "addons-708039"
	I0731 11:48:53.830290  853047 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-708039"
	I0731 11:48:53.830487  853047 cli_runner.go:164] Run: docker container inspect addons-708039 --format={{.State.Status}}
	I0731 11:48:53.830535  853047 addons.go:69] Setting gcp-auth=true in profile "addons-708039"
	I0731 11:48:53.830546  853047 mustload.go:65] Loading cluster: addons-708039
	I0731 11:48:53.830685  853047 config.go:182] Loaded profile config "addons-708039": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 11:48:53.830868  853047 cli_runner.go:164] Run: docker container inspect addons-708039 --format={{.State.Status}}
	I0731 11:48:53.830923  853047 addons.go:69] Setting metrics-server=true in profile "addons-708039"
	I0731 11:48:53.830932  853047 addons.go:231] Setting addon metrics-server=true in "addons-708039"
	I0731 11:48:53.830955  853047 host.go:66] Checking if "addons-708039" exists ...
	I0731 11:48:53.831275  853047 cli_runner.go:164] Run: docker container inspect addons-708039 --format={{.State.Status}}
	I0731 11:48:53.831326  853047 addons.go:69] Setting ingress-dns=true in profile "addons-708039"
	I0731 11:48:53.831334  853047 addons.go:231] Setting addon ingress-dns=true in "addons-708039"
	I0731 11:48:53.831364  853047 host.go:66] Checking if "addons-708039" exists ...
	I0731 11:48:53.831686  853047 cli_runner.go:164] Run: docker container inspect addons-708039 --format={{.State.Status}}
	I0731 11:48:53.831737  853047 addons.go:69] Setting inspektor-gadget=true in profile "addons-708039"
	I0731 11:48:53.831745  853047 addons.go:231] Setting addon inspektor-gadget=true in "addons-708039"
	I0731 11:48:53.831767  853047 host.go:66] Checking if "addons-708039" exists ...
	I0731 11:48:53.832087  853047 cli_runner.go:164] Run: docker container inspect addons-708039 --format={{.State.Status}}
	I0731 11:48:53.832529  853047 addons.go:69] Setting registry=true in profile "addons-708039"
	I0731 11:48:53.832548  853047 addons.go:231] Setting addon registry=true in "addons-708039"
	I0731 11:48:53.832578  853047 host.go:66] Checking if "addons-708039" exists ...
	I0731 11:48:53.832966  853047 cli_runner.go:164] Run: docker container inspect addons-708039 --format={{.State.Status}}
	I0731 11:48:53.856373  853047 addons.go:69] Setting storage-provisioner=true in profile "addons-708039"
	I0731 11:48:53.856398  853047 addons.go:231] Setting addon storage-provisioner=true in "addons-708039"
	I0731 11:48:53.856450  853047 host.go:66] Checking if "addons-708039" exists ...
	I0731 11:48:53.856880  853047 cli_runner.go:164] Run: docker container inspect addons-708039 --format={{.State.Status}}
	I0731 11:48:53.878278  853047 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0731 11:48:53.880779  853047 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0731 11:48:53.880826  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0731 11:48:53.880965  853047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-708039
	I0731 11:48:53.926260  853047 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.7
	I0731 11:48:53.935483  853047 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0731 11:48:53.935549  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0731 11:48:53.935629  853047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-708039
	I0731 11:48:53.960175  853047 out.go:177]   - Using image docker.io/registry:2.8.1
	I0731 11:48:53.961866  853047 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0731 11:48:53.963531  853047 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0731 11:48:53.963548  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0731 11:48:53.963616  853047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-708039
	I0731 11:48:53.979257  853047 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.18.1
	I0731 11:48:54.020445  853047 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0731 11:48:54.020473  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0731 11:48:54.020543  853047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-708039
	I0731 11:48:54.035917  853047 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0731 11:48:54.037904  853047 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0731 11:48:54.044253  853047 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0731 11:48:54.050994  853047 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 11:48:54.079231  853047 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 11:48:54.079253  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 11:48:54.079315  853047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-708039
	I0731 11:48:54.081928  853047 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 11:48:54.081993  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0731 11:48:54.082084  853047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-708039
	I0731 11:48:54.097636  853047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35841 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/addons-708039/id_rsa Username:docker}
	I0731 11:48:54.118337  853047 host.go:66] Checking if "addons-708039" exists ...
	I0731 11:48:54.124157  853047 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0731 11:48:54.131376  853047 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 11:48:54.131403  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 11:48:54.131473  853047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-708039
	I0731 11:48:54.131935  853047 addons.go:231] Setting addon default-storageclass=true in "addons-708039"
	I0731 11:48:54.131968  853047 host.go:66] Checking if "addons-708039" exists ...
	I0731 11:48:54.132502  853047 cli_runner.go:164] Run: docker container inspect addons-708039 --format={{.State.Status}}
	I0731 11:48:54.153631  853047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
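The bash pipeline just launched rewrites the CoreDNS ConfigMap in place: the first sed expression inserts a hosts block (mapping host.minikube.internal to the gateway IP) immediately before the "forward . /etc/resolv.conf" line, the second inserts a log directive before "errors", and the result is pushed back with kubectl replace -f -. Reconstructed from those sed expressions (not read from the live cluster), the edited Corefile fragment looks roughly like:

        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...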
	I0731 11:48:54.169684  853047 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0731 11:48:54.171617  853047 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0731 11:48:54.175999  853047 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0731 11:48:54.183673  853047 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0731 11:48:54.185415  853047 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0731 11:48:54.188659  853047 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-708039" context rescaled to 1 replicas
	I0731 11:48:54.191557  853047 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0731 11:48:54.197894  853047 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 11:48:54.207668  853047 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0731 11:48:54.211953  853047 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 11:48:54.211962  853047 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0731 11:48:54.216234  853047 out.go:177] * Verifying Kubernetes components...
	I0731 11:48:54.220916  853047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 11:48:54.220928  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0731 11:48:54.222375  853047 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0731 11:48:54.230378  853047 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0731 11:48:54.230407  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0731 11:48:54.230467  853047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-708039
	I0731 11:48:54.238103  853047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-708039
	I0731 11:48:54.248233  853047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35841 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/addons-708039/id_rsa Username:docker}
	I0731 11:48:54.268277  853047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35841 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/addons-708039/id_rsa Username:docker}
	I0731 11:48:54.276347  853047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35841 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/addons-708039/id_rsa Username:docker}
	I0731 11:48:54.312639  853047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35841 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/addons-708039/id_rsa Username:docker}
	I0731 11:48:54.323495  853047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35841 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/addons-708039/id_rsa Username:docker}
	I0731 11:48:54.331540  853047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35841 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/addons-708039/id_rsa Username:docker}
	I0731 11:48:54.373386  853047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35841 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/addons-708039/id_rsa Username:docker}
	I0731 11:48:54.379189  853047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35841 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/addons-708039/id_rsa Username:docker}
	I0731 11:48:54.386918  853047 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 11:48:54.386944  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 11:48:54.387003  853047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-708039
	I0731 11:48:54.424281  853047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35841 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/addons-708039/id_rsa Username:docker}
	I0731 11:48:54.521947  853047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0731 11:48:54.790466  853047 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0731 11:48:54.790490  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0731 11:48:54.797351  853047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 11:48:54.827458  853047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 11:48:54.865399  853047 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0731 11:48:54.865462  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0731 11:48:54.868850  853047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 11:48:54.878077  853047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 11:48:54.886261  853047 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0731 11:48:54.886324  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0731 11:48:54.898686  853047 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 11:48:54.898746  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0731 11:48:54.901110  853047 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0731 11:48:54.901167  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0731 11:48:54.969023  853047 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0731 11:48:54.969089  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0731 11:48:54.985746  853047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0731 11:48:55.086934  853047 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0731 11:48:55.087016  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0731 11:48:55.103014  853047 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 11:48:55.103094  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 11:48:55.128538  853047 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0731 11:48:55.128616  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0731 11:48:55.174666  853047 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0731 11:48:55.174753  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0731 11:48:55.302139  853047 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0731 11:48:55.302209  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0731 11:48:55.346234  853047 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 11:48:55.346298  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 11:48:55.351616  853047 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0731 11:48:55.351641  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0731 11:48:55.386502  853047 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0731 11:48:55.386526  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0731 11:48:55.497104  853047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 11:48:55.515844  853047 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0731 11:48:55.515867  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0731 11:48:55.516175  853047 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0731 11:48:55.516215  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0731 11:48:55.529826  853047 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0731 11:48:55.529853  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0731 11:48:55.628544  853047 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0731 11:48:55.628572  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0731 11:48:55.647262  853047 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 11:48:55.647286  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0731 11:48:55.667065  853047 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0731 11:48:55.667091  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0731 11:48:55.734953  853047 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0731 11:48:55.734979  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0731 11:48:55.764447  853047 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0731 11:48:55.764476  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0731 11:48:55.791537  853047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 11:48:55.873012  853047 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 11:48:55.873035  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0731 11:48:55.876738  853047 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0731 11:48:55.876761  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0731 11:48:55.993283  853047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 11:48:55.997240  853047 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0731 11:48:55.997274  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0731 11:48:56.181557  853047 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0731 11:48:56.181585  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0731 11:48:56.383842  853047 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 11:48:56.383866  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0731 11:48:56.582003  853047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 11:48:56.959843  853047 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.806171927s)
	I0731 11:48:56.959884  853047 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0731 11:48:56.959908  853047 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.721605901s)
	I0731 11:48:56.961020  853047 node_ready.go:35] waiting up to 6m0s for node "addons-708039" to be "Ready" ...
	I0731 11:48:57.996623  853047 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.474599469s)
	I0731 11:48:58.994763  853047 node_ready.go:58] node "addons-708039" has status "Ready":"False"
	I0731 11:48:59.539008  853047 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.741614498s)
	I0731 11:48:59.539154  853047 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.711654721s)
	I0731 11:48:59.539226  853047 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.661085769s)
	I0731 11:48:59.539253  853047 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.553429045s)
	I0731 11:48:59.539308  853047 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.042178855s)
	I0731 11:48:59.539382  853047 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.747816358s)
	I0731 11:48:59.539438  853047 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.546113627s)
	I0731 11:48:59.539578  853047 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.670266025s)
	I0731 11:48:59.539602  853047 addons.go:467] Verifying addon ingress=true in "addons-708039"
	I0731 11:48:59.543263  853047 out.go:177] * Verifying ingress addon...
	I0731 11:48:59.543482  853047 addons.go:467] Verifying addon registry=true in "addons-708039"
	I0731 11:48:59.543528  853047 addons.go:467] Verifying addon metrics-server=true in "addons-708039"
	W0731 11:48:59.543580  853047 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0731 11:48:59.546485  853047 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0731 11:48:59.548686  853047 retry.go:31] will retry after 137.983503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0731 11:48:59.549858  853047 out.go:177] * Verifying registry addon...
	I0731 11:48:59.552607  853047 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0731 11:48:59.588239  853047 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0731 11:48:59.588309  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:48:59.614565  853047 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0731 11:48:59.614628  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:48:59.626613  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:48:59.628009  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
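The kapi.go:96 lines that dominate the rest of this log are a simple poll: list pods by label selector, check the phase, sleep, repeat until Running or timeout. A minimal client-go sketch of that loop (illustrative only, not minikube's kapi implementation; the kubeconfig path is the one visible in the commands above):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForPods polls until every pod matching the label selector reports
    // phase Running, roughly what the kapi.go:96 lines above are doing.
    func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
    		if err != nil {
    			return err
    		}
    		ready := len(pods.Items) > 0
    		for _, p := range pods.Items {
    			if p.Status.Phase != corev1.PodRunning {
    				ready = false
    				break
    			}
    		}
    		if ready {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	if err := waitForPods(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }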
	I0731 11:48:59.688867  853047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
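The apply failure recorded above is a CRD establishment race: a single kubectl invocation both creates the VolumeSnapshot CRDs and instantiates a VolumeSnapshotClass before the API server has registered the new kind, hence "ensure CRDs are installed first". retry.go simply re-applies after a short delay (here with --force), by which time the CRDs from the first attempt are established. A generic sketch of that retry-on-failure pattern (attempt count and delay are illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // applyWithRetry re-runs `kubectl apply` until it succeeds or the
    // attempts are exhausted, mirroring the retry.go behaviour in the log:
    // the first apply creates the CRDs, a later one creates the custom
    // resources once the new kinds are registered.
    func applyWithRetry(manifest string, attempts int, delay time.Duration) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = exec.Command("kubectl", "apply", "-f", manifest).Run(); err == nil {
    			return nil
    		}
    		time.Sleep(delay)
    	}
    	return fmt.Errorf("apply %s failed after %d attempts: %w", manifest, attempts, err)
    }

    func main() {
    	if err := applyWithRetry("/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml", 5, 150*time.Millisecond); err != nil {
    		fmt.Println(err)
    	}
    }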
	I0731 11:49:00.000859  853047 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.418785794s)
	I0731 11:49:00.000900  853047 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-708039"
	I0731 11:49:00.004208  853047 out.go:177] * Verifying csi-hostpath-driver addon...
	I0731 11:49:00.006990  853047 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0731 11:49:00.026997  853047 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0731 11:49:00.027033  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:00.076539  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:00.150801  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:00.151174  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:00.584619  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:00.648228  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:00.649586  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:01.082811  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:01.136810  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:01.137091  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:01.420418  853047 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.731505769s)
	I0731 11:49:01.472641  853047 node_ready.go:58] node "addons-708039" has status "Ready":"False"
	I0731 11:49:01.582780  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:01.633287  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:01.637059  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:02.082328  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:02.136325  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:02.146465  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:02.250957  853047 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0731 11:49:02.251065  853047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-708039
	I0731 11:49:02.286889  853047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35841 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/addons-708039/id_rsa Username:docker}
	I0731 11:49:02.451046  853047 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0731 11:49:02.480996  853047 addons.go:231] Setting addon gcp-auth=true in "addons-708039"
	I0731 11:49:02.481093  853047 host.go:66] Checking if "addons-708039" exists ...
	I0731 11:49:02.481628  853047 cli_runner.go:164] Run: docker container inspect addons-708039 --format={{.State.Status}}
	I0731 11:49:02.508276  853047 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0731 11:49:02.508338  853047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-708039
	I0731 11:49:02.535894  853047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35841 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/addons-708039/id_rsa Username:docker}
	I0731 11:49:02.583378  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:02.633360  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:02.634884  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:02.650039  853047 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0731 11:49:02.652216  853047 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0731 11:49:02.653886  853047 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0731 11:49:02.653912  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0731 11:49:02.711022  853047 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0731 11:49:02.711046  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0731 11:49:02.780903  853047 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 11:49:02.780935  853047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0731 11:49:02.861644  853047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 11:49:03.095029  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:03.133866  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:03.136840  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:03.472830  853047 node_ready.go:58] node "addons-708039" has status "Ready":"False"
	I0731 11:49:03.584593  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:03.634258  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:03.643813  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:04.039953  853047 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.178259978s)
	I0731 11:49:04.041488  853047 addons.go:467] Verifying addon gcp-auth=true in "addons-708039"
	I0731 11:49:04.045059  853047 out.go:177] * Verifying gcp-auth addon...
	I0731 11:49:04.048037  853047 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0731 11:49:04.062356  853047 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0731 11:49:04.062426  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:04.069608  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:04.092338  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:04.135267  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:04.137187  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:04.575400  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:04.583944  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:04.638129  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:04.638959  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:05.075118  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:05.085627  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:05.134904  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:05.136335  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:05.574421  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:05.582372  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:05.636191  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:05.637312  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:05.971745  853047 node_ready.go:58] node "addons-708039" has status "Ready":"False"
	I0731 11:49:06.074778  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:06.083222  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:06.135494  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:06.137179  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:06.573695  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:06.581760  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:06.632814  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:06.639762  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:07.074948  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:07.084833  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:07.134145  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:07.136631  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:07.573829  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:07.583262  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:07.634233  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:07.635649  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:07.972883  853047 node_ready.go:58] node "addons-708039" has status "Ready":"False"
	I0731 11:49:08.074198  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:08.084937  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:08.134295  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:08.135224  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:08.575875  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:08.584283  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:08.633045  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:08.636224  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:09.075396  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:09.083138  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:09.132446  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:09.134307  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:09.575062  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:09.582024  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:09.632728  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:09.635516  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:10.075012  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:10.091431  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:10.139350  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:10.149308  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:10.472091  853047 node_ready.go:58] node "addons-708039" has status "Ready":"False"
	I0731 11:49:10.579422  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:10.603507  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:10.645957  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:10.649016  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:11.074747  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:11.083234  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:11.132943  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:11.136598  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:11.573748  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:11.581131  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:11.634312  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:11.634968  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:12.073707  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:12.083390  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:12.131792  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:12.134464  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:12.573903  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:12.581718  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:12.631588  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:12.633482  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:12.972390  853047 node_ready.go:58] node "addons-708039" has status "Ready":"False"
	I0731 11:49:13.074312  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:13.081102  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:13.131747  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:13.133499  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:13.573444  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:13.581286  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:13.631151  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:13.633239  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:14.073410  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:14.081759  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:14.131515  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:14.133013  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:14.574268  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:14.582193  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:14.632772  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:14.633262  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:15.073932  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:15.082392  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:15.133438  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:15.134754  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:15.472235  853047 node_ready.go:58] node "addons-708039" has status "Ready":"False"
	I0731 11:49:15.573621  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:15.581562  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:15.630849  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:15.634022  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:16.074520  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:16.082027  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:16.132363  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:16.132807  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:16.573985  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:16.580986  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:16.634754  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:16.639962  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:17.073964  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:17.081358  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:17.132097  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:17.134408  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:17.573410  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:17.581167  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:17.631905  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:17.633616  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:17.971710  853047 node_ready.go:58] node "addons-708039" has status "Ready":"False"
	I0731 11:49:18.073455  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:18.080917  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:18.132554  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:18.133914  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:18.573440  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:18.582318  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:18.638203  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:18.638711  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:19.073652  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:19.081202  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:19.132859  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:19.134472  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:19.574193  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:19.582553  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:19.631480  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:19.633412  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:20.073680  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:20.081648  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:20.131945  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:20.135957  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:20.472049  853047 node_ready.go:58] node "addons-708039" has status "Ready":"False"
	I0731 11:49:20.573032  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:20.581521  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:20.632377  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:20.633121  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:21.073692  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:21.081835  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:21.132087  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:21.133002  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:21.573648  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:21.581940  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:21.631644  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:21.633692  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:22.073704  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:22.080921  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:22.134275  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:22.135006  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:22.574373  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:22.581527  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:22.633529  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:22.633593  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:22.981038  853047 node_ready.go:58] node "addons-708039" has status "Ready":"False"
	I0731 11:49:23.074149  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:23.081276  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:23.132439  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:23.133447  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:23.573700  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:23.581456  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:23.631256  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:23.633119  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:24.074462  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:24.081585  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:24.131472  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:24.135133  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:24.574349  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:24.584488  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:24.631891  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:24.633651  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:25.073759  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:25.082417  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:25.131537  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:25.134173  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:25.471521  853047 node_ready.go:58] node "addons-708039" has status "Ready":"False"
	I0731 11:49:25.573774  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:25.581494  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:25.632092  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:25.640336  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:26.073485  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:26.081081  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:26.132216  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:26.133167  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:26.573845  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:26.582185  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:26.630869  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:26.634319  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:27.113916  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:27.115861  853047 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0731 11:49:27.115892  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:27.155973  853047 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0731 11:49:27.156000  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:27.156948  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:27.481399  853047 node_ready.go:49] node "addons-708039" has status "Ready":"True"
	I0731 11:49:27.481424  853047 node_ready.go:38] duration metric: took 30.520377451s waiting for node "addons-708039" to be "Ready" ...
	I0731 11:49:27.481435  853047 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 11:49:27.498303  853047 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-5hfz5" in "kube-system" namespace to be "Ready" ...
	I0731 11:49:27.622702  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:27.623741  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:27.653735  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:27.657883  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:28.076500  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:28.089913  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:28.135455  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:28.136338  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:28.573460  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:28.582977  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:28.634120  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:28.634940  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:29.037038  853047 pod_ready.go:92] pod "coredns-5d78c9869d-5hfz5" in "kube-system" namespace has status "Ready":"True"
	I0731 11:49:29.037108  853047 pod_ready.go:81] duration metric: took 1.538766766s waiting for pod "coredns-5d78c9869d-5hfz5" in "kube-system" namespace to be "Ready" ...
	I0731 11:49:29.037144  853047 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-708039" in "kube-system" namespace to be "Ready" ...
	I0731 11:49:29.050102  853047 pod_ready.go:92] pod "etcd-addons-708039" in "kube-system" namespace has status "Ready":"True"
	I0731 11:49:29.050226  853047 pod_ready.go:81] duration metric: took 13.062446ms waiting for pod "etcd-addons-708039" in "kube-system" namespace to be "Ready" ...
	I0731 11:49:29.050256  853047 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-708039" in "kube-system" namespace to be "Ready" ...
	I0731 11:49:29.066921  853047 pod_ready.go:92] pod "kube-apiserver-addons-708039" in "kube-system" namespace has status "Ready":"True"
	I0731 11:49:29.066992  853047 pod_ready.go:81] duration metric: took 16.713449ms waiting for pod "kube-apiserver-addons-708039" in "kube-system" namespace to be "Ready" ...
	I0731 11:49:29.067017  853047 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-708039" in "kube-system" namespace to be "Ready" ...
	I0731 11:49:29.099958  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:29.103256  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:29.109160  853047 pod_ready.go:92] pod "kube-controller-manager-addons-708039" in "kube-system" namespace has status "Ready":"True"
	I0731 11:49:29.109229  853047 pod_ready.go:81] duration metric: took 42.190558ms waiting for pod "kube-controller-manager-addons-708039" in "kube-system" namespace to be "Ready" ...
	I0731 11:49:29.109258  853047 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bhdf5" in "kube-system" namespace to be "Ready" ...
	I0731 11:49:29.136617  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:29.141405  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:29.472960  853047 pod_ready.go:92] pod "kube-proxy-bhdf5" in "kube-system" namespace has status "Ready":"True"
	I0731 11:49:29.473035  853047 pod_ready.go:81] duration metric: took 363.756108ms waiting for pod "kube-proxy-bhdf5" in "kube-system" namespace to be "Ready" ...
	I0731 11:49:29.473070  853047 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-708039" in "kube-system" namespace to be "Ready" ...
	I0731 11:49:29.587942  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:29.595718  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:29.632078  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:29.635677  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:29.872768  853047 pod_ready.go:92] pod "kube-scheduler-addons-708039" in "kube-system" namespace has status "Ready":"True"
	I0731 11:49:29.872843  853047 pod_ready.go:81] duration metric: took 399.742403ms waiting for pod "kube-scheduler-addons-708039" in "kube-system" namespace to be "Ready" ...
	I0731 11:49:29.872870  853047 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-844d8db974-pj6dz" in "kube-system" namespace to be "Ready" ...
	I0731 11:49:30.090133  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:30.102850  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:30.153772  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:30.159370  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:30.573833  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:30.582397  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:30.632565  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:30.633624  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:31.073715  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:31.083635  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:31.133154  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:31.134075  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:31.576562  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:31.585935  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:31.631208  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:31.635397  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:32.073949  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:32.082976  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:32.133683  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:32.134869  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:32.180142  853047 pod_ready.go:102] pod "metrics-server-844d8db974-pj6dz" in "kube-system" namespace has status "Ready":"False"
	I0731 11:49:32.574407  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:32.584090  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:32.633557  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:32.635895  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:33.075022  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:33.086229  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:33.145782  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:33.151282  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:33.575236  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:33.590689  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:33.635381  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:33.636753  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:34.074647  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:34.085352  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:34.137400  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:34.143894  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:34.575357  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:34.583231  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:34.640630  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:34.641789  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:34.679869  853047 pod_ready.go:102] pod "metrics-server-844d8db974-pj6dz" in "kube-system" namespace has status "Ready":"False"
	I0731 11:49:35.078087  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:35.093605  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:35.134175  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:35.140383  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:35.574634  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:35.582858  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:35.635263  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:35.636544  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:36.074406  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:36.082777  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:36.132265  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:36.134543  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:36.574029  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:36.588582  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:36.631648  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:36.634159  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:36.680364  853047 pod_ready.go:102] pod "metrics-server-844d8db974-pj6dz" in "kube-system" namespace has status "Ready":"False"
	I0731 11:49:37.074134  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:37.082935  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:37.132382  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:37.132425  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:37.574919  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:37.583803  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:37.635465  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:37.646662  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:38.096882  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:38.116174  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:38.139947  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:38.141197  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:38.575648  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:38.586840  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:38.635521  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:38.637438  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:39.075410  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:39.086431  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:39.137751  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:39.140357  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:39.183778  853047 pod_ready.go:102] pod "metrics-server-844d8db974-pj6dz" in "kube-system" namespace has status "Ready":"False"
	I0731 11:49:39.574503  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:39.594174  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:39.637280  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:39.639375  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:40.073993  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:40.084188  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:40.134275  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:40.135721  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:40.574497  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:40.582897  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:40.637566  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:40.638803  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:41.074265  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:41.084304  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:41.132193  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:41.134745  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:41.189482  853047 pod_ready.go:102] pod "metrics-server-844d8db974-pj6dz" in "kube-system" namespace has status "Ready":"False"
	I0731 11:49:41.577694  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:41.585268  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:41.634467  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:41.638422  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:42.074755  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:42.092898  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:42.132424  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:42.135699  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:42.574008  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:42.585481  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:42.642429  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:42.644775  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:43.073922  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:43.083537  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:43.131750  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:43.134464  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:43.574422  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:43.582710  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:43.632102  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:43.634520  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:43.679507  853047 pod_ready.go:102] pod "metrics-server-844d8db974-pj6dz" in "kube-system" namespace has status "Ready":"False"
	I0731 11:49:44.076155  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:44.099688  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:44.140747  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:44.143464  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:44.622202  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:44.625578  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:44.646393  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:44.652595  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:45.089989  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:45.094148  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:45.149670  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:45.158785  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:45.573599  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:45.588289  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:45.634190  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:45.635593  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:45.680990  853047 pod_ready.go:102] pod "metrics-server-844d8db974-pj6dz" in "kube-system" namespace has status "Ready":"False"
	I0731 11:49:46.075478  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:46.085581  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:46.150313  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:46.169159  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:46.573656  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:46.583912  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:46.643644  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:46.644057  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:47.074493  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:47.085301  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:47.141894  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:47.142391  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:47.575806  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:47.586547  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:47.641945  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:47.643151  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:47.681658  853047 pod_ready.go:102] pod "metrics-server-844d8db974-pj6dz" in "kube-system" namespace has status "Ready":"False"
	I0731 11:49:48.075504  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:48.098297  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:48.145318  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:48.147587  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:48.576249  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:48.585913  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:48.641715  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:48.645066  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:49.078510  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:49.085067  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:49.135139  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:49.138071  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:49.580183  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:49.592483  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:49.633012  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:49.636623  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:49.680649  853047 pod_ready.go:92] pod "metrics-server-844d8db974-pj6dz" in "kube-system" namespace has status "Ready":"True"
	I0731 11:49:49.680673  853047 pod_ready.go:81] duration metric: took 19.807782539s waiting for pod "metrics-server-844d8db974-pj6dz" in "kube-system" namespace to be "Ready" ...
	I0731 11:49:49.680694  853047 pod_ready.go:38] duration metric: took 22.19924628s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
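
	The pod_ready.go lines above poll each system-critical pod by label selector until its PodReady condition turns True, with a 6m0s ceiling. A minimal standalone sketch of that style of check, using client-go against the default kubeconfig (this is an illustration, not minikube's actual implementation; the k8s-app=kube-dns selector is taken from the label list in the log):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether a pod's PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Poll every second; give up after the 6-minute ceiling the log mentions.
		err = wait.PollImmediate(time.Second, 6*time.Minute, func() (bool, error) {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // nothing matched yet; keep waiting
			}
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					return false, nil
				}
			}
			return true, nil
		})
		fmt.Println("all kube-dns pods Ready:", err == nil)
	}
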
	I0731 11:49:49.680709  853047 api_server.go:52] waiting for apiserver process to appear ...
	I0731 11:49:49.680774  853047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 11:49:49.696172  853047 api_server.go:72] duration metric: took 55.484363593s to wait for apiserver process to appear ...
	I0731 11:49:49.696195  853047 api_server.go:88] waiting for apiserver healthz status ...
	I0731 11:49:49.696213  853047 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0731 11:49:49.705852  853047 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0731 11:49:49.707126  853047 api_server.go:141] control plane version: v1.27.3
	I0731 11:49:49.707154  853047 api_server.go:131] duration metric: took 10.953001ms to wait for apiserver health ...
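
	The healthz wait above is a plain HTTPS GET against the apiserver endpoint shown in the log, succeeding once it returns 200 with body "ok". A minimal sketch of the same probe (certificate verification is skipped here only because the test cluster uses a self-signed CA; a proper check would load the cluster CA instead):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // a healthy apiserver prints: 200 ok
	}
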
	I0731 11:49:49.707163  853047 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 11:49:49.717170  853047 system_pods.go:59] 17 kube-system pods found
	I0731 11:49:49.717217  853047 system_pods.go:61] "coredns-5d78c9869d-5hfz5" [3e73ab58-aec9-492d-b821-7310e1c73b2d] Running
	I0731 11:49:49.717227  853047 system_pods.go:61] "csi-hostpath-attacher-0" [97b624d8-08e1-4eef-b432-4dc215aa73d2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0731 11:49:49.717237  853047 system_pods.go:61] "csi-hostpath-resizer-0" [16a819e6-4dec-445d-b225-10bf87b02a9e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0731 11:49:49.717246  853047 system_pods.go:61] "csi-hostpathplugin-95shj" [d65696ee-2988-4d7f-8302-a35db9e109d3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0731 11:49:49.717257  853047 system_pods.go:61] "etcd-addons-708039" [1894d5b0-f1cc-41b5-8648-aa797323d25d] Running
	I0731 11:49:49.717267  853047 system_pods.go:61] "kindnet-lvkjp" [b7e6332f-85fd-42d3-8857-3babe64363a3] Running
	I0731 11:49:49.717272  853047 system_pods.go:61] "kube-apiserver-addons-708039" [2ac31be6-b6cd-489c-a63d-53f3b72a9007] Running
	I0731 11:49:49.717280  853047 system_pods.go:61] "kube-controller-manager-addons-708039" [cebc44a6-40c4-463c-aa72-e4ab2504c6e4] Running
	I0731 11:49:49.717287  853047 system_pods.go:61] "kube-ingress-dns-minikube" [cc53229c-b247-41d9-a2d0-b333e7a67085] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0731 11:49:49.717297  853047 system_pods.go:61] "kube-proxy-bhdf5" [2bf9296d-49cd-4de5-abb6-9d1ee02e1150] Running
	I0731 11:49:49.717302  853047 system_pods.go:61] "kube-scheduler-addons-708039" [be9eb695-bd76-4ddd-87e9-20e23a05eb26] Running
	I0731 11:49:49.717307  853047 system_pods.go:61] "metrics-server-844d8db974-pj6dz" [75434341-2f4e-41e2-966e-e15c3c9c5cee] Running
	I0731 11:49:49.717315  853047 system_pods.go:61] "registry-d4mkn" [9853603b-0beb-4b34-b0d9-acffe26828eb] Running
	I0731 11:49:49.717321  853047 system_pods.go:61] "registry-proxy-55qnm" [5c1d1fda-b8f3-4594-821f-583a2bd57c5e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0731 11:49:49.717331  853047 system_pods.go:61] "snapshot-controller-75bbb956b9-h6227" [67885729-3dc2-4c96-aeda-d4231ba53b29] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 11:49:49.717339  853047 system_pods.go:61] "snapshot-controller-75bbb956b9-w4xps" [0804a70b-89ce-4887-aacf-5bbb3240b877] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 11:49:49.717348  853047 system_pods.go:61] "storage-provisioner" [c34b0667-f710-49fd-a4fa-d26cdffee208] Running
	I0731 11:49:49.717355  853047 system_pods.go:74] duration metric: took 10.187022ms to wait for pod list to return data ...
	I0731 11:49:49.717365  853047 default_sa.go:34] waiting for default service account to be created ...
	I0731 11:49:49.719844  853047 default_sa.go:45] found service account: "default"
	I0731 11:49:49.719868  853047 default_sa.go:55] duration metric: took 2.497015ms for default service account to be created ...
	I0731 11:49:49.719878  853047 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 11:49:49.729668  853047 system_pods.go:86] 17 kube-system pods found
	I0731 11:49:49.729701  853047 system_pods.go:89] "coredns-5d78c9869d-5hfz5" [3e73ab58-aec9-492d-b821-7310e1c73b2d] Running
	I0731 11:49:49.729712  853047 system_pods.go:89] "csi-hostpath-attacher-0" [97b624d8-08e1-4eef-b432-4dc215aa73d2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0731 11:49:49.729722  853047 system_pods.go:89] "csi-hostpath-resizer-0" [16a819e6-4dec-445d-b225-10bf87b02a9e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0731 11:49:49.729730  853047 system_pods.go:89] "csi-hostpathplugin-95shj" [d65696ee-2988-4d7f-8302-a35db9e109d3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0731 11:49:49.729736  853047 system_pods.go:89] "etcd-addons-708039" [1894d5b0-f1cc-41b5-8648-aa797323d25d] Running
	I0731 11:49:49.729742  853047 system_pods.go:89] "kindnet-lvkjp" [b7e6332f-85fd-42d3-8857-3babe64363a3] Running
	I0731 11:49:49.729752  853047 system_pods.go:89] "kube-apiserver-addons-708039" [2ac31be6-b6cd-489c-a63d-53f3b72a9007] Running
	I0731 11:49:49.729765  853047 system_pods.go:89] "kube-controller-manager-addons-708039" [cebc44a6-40c4-463c-aa72-e4ab2504c6e4] Running
	I0731 11:49:49.729773  853047 system_pods.go:89] "kube-ingress-dns-minikube" [cc53229c-b247-41d9-a2d0-b333e7a67085] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0731 11:49:49.729779  853047 system_pods.go:89] "kube-proxy-bhdf5" [2bf9296d-49cd-4de5-abb6-9d1ee02e1150] Running
	I0731 11:49:49.729784  853047 system_pods.go:89] "kube-scheduler-addons-708039" [be9eb695-bd76-4ddd-87e9-20e23a05eb26] Running
	I0731 11:49:49.729792  853047 system_pods.go:89] "metrics-server-844d8db974-pj6dz" [75434341-2f4e-41e2-966e-e15c3c9c5cee] Running
	I0731 11:49:49.729798  853047 system_pods.go:89] "registry-d4mkn" [9853603b-0beb-4b34-b0d9-acffe26828eb] Running
	I0731 11:49:49.729806  853047 system_pods.go:89] "registry-proxy-55qnm" [5c1d1fda-b8f3-4594-821f-583a2bd57c5e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0731 11:49:49.729816  853047 system_pods.go:89] "snapshot-controller-75bbb956b9-h6227" [67885729-3dc2-4c96-aeda-d4231ba53b29] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 11:49:49.729824  853047 system_pods.go:89] "snapshot-controller-75bbb956b9-w4xps" [0804a70b-89ce-4887-aacf-5bbb3240b877] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 11:49:49.729832  853047 system_pods.go:89] "storage-provisioner" [c34b0667-f710-49fd-a4fa-d26cdffee208] Running
	I0731 11:49:49.729839  853047 system_pods.go:126] duration metric: took 9.956615ms to wait for k8s-apps to be running ...
	I0731 11:49:49.729847  853047 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 11:49:49.729911  853047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 11:49:49.745180  853047 system_svc.go:56] duration metric: took 15.323278ms WaitForService to wait for kubelet.
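
	The WaitForService step above shells into the node and relies on systemctl's exit code: `systemctl is-active --quiet <unit>` exits 0 if and only if the unit is active. A tiny sketch of the same signal from Go (run on the node itself; the sudo wrapper from the log is omitted):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Exit code 0 means the kubelet unit is active.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}
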
	I0731 11:49:49.745208  853047 kubeadm.go:581] duration metric: took 55.533403911s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0731 11:49:49.745233  853047 node_conditions.go:102] verifying NodePressure condition ...
	I0731 11:49:49.748528  853047 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0731 11:49:49.748562  853047 node_conditions.go:123] node cpu capacity is 2
	I0731 11:49:49.748576  853047 node_conditions.go:105] duration metric: took 3.338159ms to run NodePressure ...
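
	The NodePressure step reads the node object's conditions and capacity figures (the 203034800Ki ephemeral storage and 2 CPUs above come from node.Status.Capacity). A hedged client-go sketch that surfaces the same fields, assuming the default kubeconfig points at this cluster:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "addons-708039", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// MemoryPressure, DiskPressure, and PIDPressure should all be False on a healthy node.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%s=%s\n", c.Type, c.Status)
		}
		fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral())
		fmt.Println("cpu:", node.Status.Capacity.Cpu())
	}
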
	I0731 11:49:49.748587  853047 start.go:228] waiting for startup goroutines ...
	I0731 11:49:50.074633  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:50.082958  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:50.131672  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:50.134108  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:50.573998  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:50.582424  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:50.635953  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:50.639622  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:51.074582  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:51.089670  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:51.135057  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:51.135637  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:51.575024  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:51.583343  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:51.632033  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:51.635651  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:52.084829  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:52.085390  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:52.136009  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:52.138742  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:52.576187  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:52.582389  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:52.633789  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:52.634749  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:53.074167  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:53.083316  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:53.134960  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:53.136074  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:53.573853  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:53.594277  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:53.635281  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:53.636451  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:54.074036  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:54.083399  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:54.135395  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:54.136372  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:54.575399  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:54.583594  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:54.632264  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:54.635709  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:55.074253  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:55.083634  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:55.132246  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:55.135831  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:55.573860  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:55.583704  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:55.634372  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:55.635397  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:56.074674  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:56.084727  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:56.134136  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:56.135643  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:56.574465  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:56.583782  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:56.637354  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:56.644245  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:57.086914  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:57.096012  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:57.136001  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:57.138561  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:57.575957  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:57.582844  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:57.634935  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:57.635749  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:58.074366  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:58.082720  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:58.133933  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:58.134653  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:58.575090  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:58.586119  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:58.635948  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:58.637626  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:59.074487  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:59.083324  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:59.132946  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:59.134254  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:49:59.573811  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:49:59.587827  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:49:59.644746  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:49:59.645374  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:50:00.093632  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:00.128852  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:50:00.180274  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:50:00.222055  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:00.577011  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:50:00.585556  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:00.636458  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:00.637509  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:50:01.073765  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:50:01.089785  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:01.133439  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:01.137796  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:50:01.574805  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:50:01.584606  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:01.635185  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:50:01.636712  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:02.074571  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:50:02.086827  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:02.137705  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:50:02.139700  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:02.574495  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:50:02.585620  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:02.633986  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:02.636435  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:50:03.073736  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:50:03.082607  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:03.131902  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:03.135289  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 11:50:03.573607  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:50:03.582483  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:03.632314  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:03.635338  853047 kapi.go:107] duration metric: took 1m4.082729378s to wait for kubernetes.io/minikube-addons=registry ...
	I0731 11:50:04.081917  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:50:04.087975  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:04.133280  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:04.575397  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:50:04.584326  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:04.632881  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:05.074391  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:50:05.083469  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:05.132231  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:05.574435  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:50:05.586123  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:05.631147  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:06.074408  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:50:06.083299  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:06.132448  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:06.574686  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:50:06.582906  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:06.631971  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:07.073837  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 11:50:07.083771  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:07.132942  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:07.574164  853047 kapi.go:107] duration metric: took 1m3.526123696s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0731 11:50:07.575926  853047 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-708039 cluster.
	I0731 11:50:07.577456  853047 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0731 11:50:07.579314  853047 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0731 11:50:07.583019  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:07.631612  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:08.082749  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:08.131271  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:08.586326  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:08.632972  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:09.086962  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:09.134555  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:09.582981  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:09.633031  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:10.083781  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:10.132360  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:10.588220  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:10.633857  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:11.082860  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:11.132270  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:11.604227  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:11.635004  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:12.082876  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:12.132881  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:12.584812  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:12.633188  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:13.083557  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:13.132583  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:13.583421  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:13.652147  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:14.083698  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:14.131421  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:14.590395  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:14.634884  853047 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 11:50:15.088163  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:15.132398  853047 kapi.go:107] duration metric: took 1m15.585911026s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0731 11:50:15.582940  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:16.087671  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:16.582785  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:17.082292  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:17.585492  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:18.083023  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:18.583187  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:19.082326  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:19.584380  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:20.086943  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:20.583420  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:21.082979  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:21.583053  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:22.084342  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:22.582427  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:23.082597  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:23.582400  853047 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 11:50:24.083111  853047 kapi.go:107] duration metric: took 1m24.07612183s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0731 11:50:24.085263  853047 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, inspektor-gadget, default-storageclass, metrics-server, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0731 11:50:24.087048  853047 addons.go:502] enable addons completed in 1m30.258848601s: enabled=[cloud-spanner storage-provisioner ingress-dns inspektor-gadget default-storageclass metrics-server volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0731 11:50:24.087126  853047 start.go:233] waiting for cluster config update ...
	I0731 11:50:24.087149  853047 start.go:242] writing updated cluster config ...
	I0731 11:50:24.087499  853047 ssh_runner.go:195] Run: rm -f paused
	I0731 11:50:24.249670  853047 start.go:596] kubectl: 1.27.4, cluster: 1.27.3 (minor skew: 0)
	I0731 11:50:24.252539  853047 out.go:177] * Done! kubectl is now configured to use "addons-708039" cluster and "default" namespace by default
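	The kapi.go:96 lines above are minikube polling each addon's pods by label selector, at roughly half-second intervals per selector, until every matching pod reports Running; each kapi.go:107 line records the total wait. A minimal sketch of that polling pattern, assuming client-go; the function and variable names below are illustrative, not minikube's actual internals:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPods polls pods matching selector in ns until all of them
	// report Running or the timeout elapses, logging progress on each pass.
	func waitForPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				allRunning := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						allRunning = false
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					}
				}
				if allRunning {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log above
		}
		return fmt.Errorf("timed out waiting for pods matching %q", selector)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForPods(cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
			panic(err)
		}
	}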
	
	* 
	* ==> CRI-O <==
	* Jul 31 11:53:18 addons-708039 crio[891]: time="2023-07-31 11:53:18.235781010Z" level=info msg="Creating container: default/hello-world-app-65bdb79f98-87fk7/hello-world-app" id=cda9821b-6d6e-45a8-ab8c-14422c39ada7 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 31 11:53:18 addons-708039 crio[891]: time="2023-07-31 11:53:18.235884279Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 31 11:53:18 addons-708039 crio[891]: time="2023-07-31 11:53:18.287650433Z" level=info msg="Removing container: e134d922534d0a2f4eafbd39c633ef8a22f48bcfeff6ace451d43f51850fe3a3" id=201884d9-370e-4bcb-b148-48e831ccc3c0 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 31 11:53:18 addons-708039 crio[891]: time="2023-07-31 11:53:18.347938028Z" level=info msg="Removed container e134d922534d0a2f4eafbd39c633ef8a22f48bcfeff6ace451d43f51850fe3a3: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=201884d9-370e-4bcb-b148-48e831ccc3c0 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 31 11:53:18 addons-708039 crio[891]: time="2023-07-31 11:53:18.383768879Z" level=info msg="Created container c297304297542e9b55c50efa672670952ebff1755830dfde00289197a74f5d7d: default/hello-world-app-65bdb79f98-87fk7/hello-world-app" id=cda9821b-6d6e-45a8-ab8c-14422c39ada7 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 31 11:53:18 addons-708039 crio[891]: time="2023-07-31 11:53:18.384737704Z" level=info msg="Starting container: c297304297542e9b55c50efa672670952ebff1755830dfde00289197a74f5d7d" id=cc5fdd8f-b9fe-4a78-ba18-c03a674a86fa name=/runtime.v1.RuntimeService/StartContainer
	Jul 31 11:53:18 addons-708039 conmon[7720]: conmon c297304297542e9b55c5 <ninfo>: container 7742 exited with status 1
	Jul 31 11:53:18 addons-708039 crio[891]: time="2023-07-31 11:53:18.407781481Z" level=info msg="Started container" PID=7742 containerID=c297304297542e9b55c50efa672670952ebff1755830dfde00289197a74f5d7d description=default/hello-world-app-65bdb79f98-87fk7/hello-world-app id=cc5fdd8f-b9fe-4a78-ba18-c03a674a86fa name=/runtime.v1.RuntimeService/StartContainer sandboxID=77bdfafbc28f9d075ee72ab74fdf8d243b7c3d8b68c64b65842224b509f259b9
	Jul 31 11:53:19 addons-708039 crio[891]: time="2023-07-31 11:53:19.037847943Z" level=info msg="Stopping container: 3234e9ca2d60bcea9b1d94b7bb7722d0953227d80e5764bf06b23b218224b47f (timeout: 1s)" id=e870861d-9d99-4f00-97fa-ce32a24abaec name=/runtime.v1.RuntimeService/StopContainer
	Jul 31 11:53:19 addons-708039 crio[891]: time="2023-07-31 11:53:19.288440460Z" level=info msg="Removing container: 3890d611871a038e37cce416867eb978dbdbf23fc831ec3bb4db942bf4b58953" id=afa25f01-bcd1-469d-8b50-1ca7a2753a65 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 31 11:53:19 addons-708039 crio[891]: time="2023-07-31 11:53:19.317516541Z" level=info msg="Removed container 3890d611871a038e37cce416867eb978dbdbf23fc831ec3bb4db942bf4b58953: default/hello-world-app-65bdb79f98-87fk7/hello-world-app" id=afa25f01-bcd1-469d-8b50-1ca7a2753a65 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 31 11:53:20 addons-708039 crio[891]: time="2023-07-31 11:53:20.051378047Z" level=warning msg="Stopping container 3234e9ca2d60bcea9b1d94b7bb7722d0953227d80e5764bf06b23b218224b47f with stop signal timed out: timeout reached after 1 seconds waiting for container process to exit" id=e870861d-9d99-4f00-97fa-ce32a24abaec name=/runtime.v1.RuntimeService/StopContainer
	Jul 31 11:53:20 addons-708039 conmon[4845]: conmon 3234e9ca2d60bcea9b1d <ninfo>: container 4856 exited with status 137
	Jul 31 11:53:20 addons-708039 crio[891]: time="2023-07-31 11:53:20.209394429Z" level=info msg="Stopped container 3234e9ca2d60bcea9b1d94b7bb7722d0953227d80e5764bf06b23b218224b47f: ingress-nginx/ingress-nginx-controller-7799c6795f-5xbj6/controller" id=e870861d-9d99-4f00-97fa-ce32a24abaec name=/runtime.v1.RuntimeService/StopContainer
	Jul 31 11:53:20 addons-708039 crio[891]: time="2023-07-31 11:53:20.209931099Z" level=info msg="Stopping pod sandbox: 56ad16b794f7f665f763f235b626f9ffa65febc418439997aaa21ba92a072780" id=00579892-7670-4045-abab-af654bed33f8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 31 11:53:20 addons-708039 crio[891]: time="2023-07-31 11:53:20.213560090Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-6FEQXGV4JIKFDCRK - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-QJLROXAWLCBVJJHJ - [0:0]\n-X KUBE-HP-6FEQXGV4JIKFDCRK\n-X KUBE-HP-QJLROXAWLCBVJJHJ\nCOMMIT\n"
	Jul 31 11:53:20 addons-708039 crio[891]: time="2023-07-31 11:53:20.215160903Z" level=info msg="Closing host port tcp:80"
	Jul 31 11:53:20 addons-708039 crio[891]: time="2023-07-31 11:53:20.215207951Z" level=info msg="Closing host port tcp:443"
	Jul 31 11:53:20 addons-708039 crio[891]: time="2023-07-31 11:53:20.216835580Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jul 31 11:53:20 addons-708039 crio[891]: time="2023-07-31 11:53:20.216863969Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jul 31 11:53:20 addons-708039 crio[891]: time="2023-07-31 11:53:20.217031927Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7799c6795f-5xbj6 Namespace:ingress-nginx ID:56ad16b794f7f665f763f235b626f9ffa65febc418439997aaa21ba92a072780 UID:dccc70d3-3255-4a53-be7e-05763ce365f4 NetNS:/var/run/netns/a2faad2d-4eb2-4a9f-aab7-526ffe528607 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 31 11:53:20 addons-708039 crio[891]: time="2023-07-31 11:53:20.217174393Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7799c6795f-5xbj6 from CNI network \"kindnet\" (type=ptp)"
	Jul 31 11:53:20 addons-708039 crio[891]: time="2023-07-31 11:53:20.237692043Z" level=info msg="Stopped pod sandbox: 56ad16b794f7f665f763f235b626f9ffa65febc418439997aaa21ba92a072780" id=00579892-7670-4045-abab-af654bed33f8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 31 11:53:20 addons-708039 crio[891]: time="2023-07-31 11:53:20.293023253Z" level=info msg="Removing container: 3234e9ca2d60bcea9b1d94b7bb7722d0953227d80e5764bf06b23b218224b47f" id=03dc01c3-80c5-4282-b0c7-6ba7b5b55740 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 31 11:53:20 addons-708039 crio[891]: time="2023-07-31 11:53:20.312897715Z" level=info msg="Removed container 3234e9ca2d60bcea9b1d94b7bb7722d0953227d80e5764bf06b23b218224b47f: ingress-nginx/ingress-nginx-controller-7799c6795f-5xbj6/controller" id=03dc01c3-80c5-4282-b0c7-6ba7b5b55740 name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c297304297542       13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5                                                             9 seconds ago       Exited              hello-world-app           2                   77bdfafbc28f9       hello-world-app-65bdb79f98-87fk7
	b6d6bc2b0245f       ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45                        2 minutes ago       Running             headlamp                  0                   2ded34fb945a5       headlamp-66f6498c69-bl2tk
	d6edfd62b1ddb       docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                              2 minutes ago       Running             nginx                     0                   5ca60e9326fd6       nginx
	601bfea0f401d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                 3 minutes ago       Running             gcp-auth                  0                   ecaec23263795       gcp-auth-58478865f7-gk6pj
	bacc3f6dc4885       8f2588812ab2947d53d2f99b11142e2be088330ec67837bb82801c0d3501af78                                                             3 minutes ago       Exited              patch                     2                   4b676ffe52497       ingress-nginx-admission-patch-g4dm4
	e72891247a314       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   3 minutes ago       Exited              create                    0                   456065815af6e       ingress-nginx-admission-create-z9m9j
	7a96193592dc0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             3 minutes ago       Running             storage-provisioner       0                   f0fbc506ab3e0       storage-provisioner
	cc69435cbbf82       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             3 minutes ago       Running             coredns                   0                   0f243b04c7287       coredns-5d78c9869d-5hfz5
	c3610f6ff422c       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79                                                             4 minutes ago       Running             kindnet-cni               0                   ab5d17aaa0c17       kindnet-lvkjp
	cf042d31b74f3       fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a                                                             4 minutes ago       Running             kube-proxy                0                   904d4af841401       kube-proxy-bhdf5
	e02ebfd967bc5       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737                                                             4 minutes ago       Running             etcd                      0                   c0e014833284e       etcd-addons-708039
	5efa8e89d73cb       39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473                                                             4 minutes ago       Running             kube-apiserver            0                   0bf63347e89f8       kube-apiserver-addons-708039
	6dd3f6ea0d441       ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8                                                             4 minutes ago       Running             kube-controller-manager   0                   3f245722ab5f5       kube-controller-manager-addons-708039
	db6d39e9c3578       bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540                                                             4 minutes ago       Running             kube-scheduler            0                   cfa91e4e80bf1       kube-scheduler-addons-708039
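	In the table above, hello-world-app-65bdb79f98-87fk7 is in STATE Exited with ATTEMPT 2: the CRI-O log earlier shows the container being created, started, and exiting with status 1 within the same second (11:53:18), so the kubelet is restarting it with backoff, consistent with the usual CrashLoopBackOff progression.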
	
	* 
	* ==> coredns [cc69435cbbf821de086a8e46e82766686caa42864f78d2b03b2aebda4ea69d9d] <==
	* [INFO] 10.244.0.16:34815 - 45240 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000106584s
	[INFO] 10.244.0.16:45443 - 60963 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00007442s
	[INFO] 10.244.0.16:45443 - 45311 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00006967s
	[INFO] 10.244.0.16:45443 - 65461 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000074699s
	[INFO] 10.244.0.16:45443 - 46605 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000898401s
	[INFO] 10.244.0.16:45443 - 49276 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000930794s
	[INFO] 10.244.0.16:45443 - 27526 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000057821s
	[INFO] 10.244.0.16:44662 - 54370 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000105337s
	[INFO] 10.244.0.16:50074 - 21411 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000065402s
	[INFO] 10.244.0.16:50074 - 16376 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000079819s
	[INFO] 10.244.0.16:44662 - 4657 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000043356s
	[INFO] 10.244.0.16:50074 - 50176 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000041994s
	[INFO] 10.244.0.16:44662 - 43720 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000046852s
	[INFO] 10.244.0.16:44662 - 326 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045629s
	[INFO] 10.244.0.16:50074 - 30936 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000043881s
	[INFO] 10.244.0.16:50074 - 7790 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042666s
	[INFO] 10.244.0.16:44662 - 20414 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000039721s
	[INFO] 10.244.0.16:44662 - 62140 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004297s
	[INFO] 10.244.0.16:50074 - 17434 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036816s
	[INFO] 10.244.0.16:44662 - 20941 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002564798s
	[INFO] 10.244.0.16:50074 - 26994 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00273168s
	[INFO] 10.244.0.16:50074 - 58129 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000879931s
	[INFO] 10.244.0.16:44662 - 64255 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001116147s
	[INFO] 10.244.0.16:50074 - 52966 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000066691s
	[INFO] 10.244.0.16:44662 - 12420 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000049632s
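	The NXDOMAIN ladder above is normal resolver behavior, not a CoreDNS fault: with the cluster default of ndots:5, the name hello-world-app.default.svc.cluster.local has only four dots, so the client walks its search domains first (each returning NXDOMAIN) before the absolute query returns NOERROR. Judging by the ingress-nginx.svc.cluster.local suffixes, the client is the ingress controller, and its /etc/resolv.conf presumably looks like the following (the nameserver address is the conventional kube-dns ClusterIP, an assumption since the file itself is not in this log):

	search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	nameserver 10.96.0.10
	options ndots:5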
	
	* 
	* ==> describe nodes <==
	* Name:               addons-708039
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-708039
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0b7b0f4114385a1c2b88c73e894c2289f44aee35
	                    minikube.k8s.io/name=addons-708039
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_31T11_48_42_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-708039
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 31 Jul 2023 11:48:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-708039
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 31 Jul 2023 11:53:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 31 Jul 2023 11:53:17 +0000   Mon, 31 Jul 2023 11:48:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 31 Jul 2023 11:53:17 +0000   Mon, 31 Jul 2023 11:48:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 31 Jul 2023 11:53:17 +0000   Mon, 31 Jul 2023 11:48:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 31 Jul 2023 11:53:17 +0000   Mon, 31 Jul 2023 11:49:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-708039
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	System Info:
	  Machine ID:                 623e2974717e4ae8a4deedd7114634a8
	  System UUID:                e68c67de-b9b8-4d4e-9dd5-16b048ade3c8
	  Boot ID:                    3709f028-2d57-4df1-ae3d-22c113dc2eeb
	  Kernel Version:             5.15.0-1040-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-87fk7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  gcp-auth                    gcp-auth-58478865f7-gk6pj                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  headlamp                    headlamp-66f6498c69-bl2tk                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 coredns-5d78c9869d-5hfz5                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m34s
	  kube-system                 etcd-addons-708039                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m46s
	  kube-system                 kindnet-lvkjp                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m34s
	  kube-system                 kube-apiserver-addons-708039             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-controller-manager-addons-708039    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-proxy-bhdf5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-scheduler-addons-708039             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (2%)   220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m28s                  kube-proxy       
	  Normal  Starting                 4m55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m55s (x8 over 4m55s)  kubelet          Node addons-708039 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m55s (x8 over 4m55s)  kubelet          Node addons-708039 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m55s (x8 over 4m55s)  kubelet          Node addons-708039 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m46s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m46s                  kubelet          Node addons-708039 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m46s                  kubelet          Node addons-708039 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m46s                  kubelet          Node addons-708039 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m35s                  node-controller  Node addons-708039 event: Registered Node addons-708039 in Controller
	  Normal  NodeReady                4m1s                   kubelet          Node addons-708039 status is now: NodeReady
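	The percentages in the resource tables above are taken against the node's allocatable resources (2 CPU, 8022628Ki memory): 850m of requested CPU out of 2000m is 42%, and 220Mi (225280Ki) of requested memory out of 8022628Ki is about 2.8%, which kubectl truncates to 2%.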
	
	* 
	* ==> dmesg <==
	* [  +0.001147] FS-Cache: O-key=[8] 'f7dfc90000000000'
	[  +0.000716] FS-Cache: N-cookie c=0000009c [p=00000093 fl=2 nc=0 na=1]
	[  +0.000951] FS-Cache: N-cookie d=00000000d17d7ada{9p.inode} n=00000000ef767523
	[  +0.001121] FS-Cache: N-key=[8] 'f7dfc90000000000'
	[  +0.002869] FS-Cache: Duplicate cookie detected
	[  +0.000796] FS-Cache: O-cookie c=00000096 [p=00000093 fl=226 nc=0 na=1]
	[  +0.001064] FS-Cache: O-cookie d=00000000d17d7ada{9p.inode} n=00000000cda0340b
	[  +0.001055] FS-Cache: O-key=[8] 'f7dfc90000000000'
	[  +0.000740] FS-Cache: N-cookie c=0000009d [p=00000093 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000d17d7ada{9p.inode} n=0000000040ec07b0
	[  +0.001126] FS-Cache: N-key=[8] 'f7dfc90000000000'
	[  +2.165288] FS-Cache: Duplicate cookie detected
	[  +0.000760] FS-Cache: O-cookie c=00000094 [p=00000093 fl=226 nc=0 na=1]
	[  +0.000972] FS-Cache: O-cookie d=00000000d17d7ada{9p.inode} n=00000000ca9d321b
	[  +0.001179] FS-Cache: O-key=[8] 'f6dfc90000000000'
	[  +0.000706] FS-Cache: N-cookie c=0000009f [p=00000093 fl=2 nc=0 na=1]
	[  +0.000981] FS-Cache: N-cookie d=00000000d17d7ada{9p.inode} n=000000006d9d7fe3
	[  +0.001048] FS-Cache: N-key=[8] 'f6dfc90000000000'
	[  +0.280010] FS-Cache: Duplicate cookie detected
	[  +0.000797] FS-Cache: O-cookie c=00000099 [p=00000093 fl=226 nc=0 na=1]
	[  +0.001084] FS-Cache: O-cookie d=00000000d17d7ada{9p.inode} n=00000000f332cf5e
	[  +0.001044] FS-Cache: O-key=[8] 'fcdfc90000000000'
	[  +0.000726] FS-Cache: N-cookie c=000000a0 [p=00000093 fl=2 nc=0 na=1]
	[  +0.001030] FS-Cache: N-cookie d=00000000d17d7ada{9p.inode} n=00000000ef767523
	[  +0.001146] FS-Cache: N-key=[8] 'fcdfc90000000000'
	
	* 
	* ==> etcd [e02ebfd967bc50873b6a93e07c2fa2db6b4f1c1d1ec30e541a3120c385d20993] <==
	* {"level":"info","ts":"2023-07-31T11:48:56.005Z","caller":"traceutil/trace.go:171","msg":"trace[6661150] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"406.364842ms","start":"2023-07-31T11:48:55.599Z","end":"2023-07-31T11:48:56.005Z","steps":["trace[6661150] 'process raft request'  (duration: 381.919008ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-31T11:48:56.092Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-31T11:48:55.599Z","time spent":"492.74535ms","remote":"127.0.0.1:44266","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4081,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:395 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4032 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"warn","ts":"2023-07-31T11:48:56.039Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"439.610848ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2023-07-31T11:48:56.160Z","caller":"traceutil/trace.go:171","msg":"trace[624105958] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:400; }","duration":"560.854987ms","start":"2023-07-31T11:48:55.599Z","end":"2023-07-31T11:48:56.160Z","steps":["trace[624105958] 'agreement among raft nodes before linearized reading'  (duration: 400.479322ms)","trace[624105958] 'range keys from bolt db'  (duration: 39.101518ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-31T11:48:56.160Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-31T11:48:55.599Z","time spent":"560.905547ms","remote":"127.0.0.1:43952","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":636,"request content":"key:\"/registry/configmaps/kube-system/coredns\" "}
	{"level":"warn","ts":"2023-07-31T11:48:56.159Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"559.79818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-31T11:48:56.160Z","caller":"traceutil/trace.go:171","msg":"trace[877099483] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:401; }","duration":"560.841228ms","start":"2023-07-31T11:48:55.599Z","end":"2023-07-31T11:48:56.160Z","steps":["trace[877099483] 'agreement among raft nodes before linearized reading'  (duration: 559.688536ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-31T11:48:56.160Z","caller":"traceutil/trace.go:171","msg":"trace[1434896667] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"141.126082ms","start":"2023-07-31T11:48:56.019Z","end":"2023-07-31T11:48:56.160Z","steps":["trace[1434896667] 'process raft request'  (duration: 45.471688ms)","trace[1434896667] 'compare'  (duration: 79.396517ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-31T11:48:56.160Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-31T11:48:55.599Z","time spent":"560.877871ms","remote":"127.0.0.1:44298","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-07-31T11:48:56.161Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-31T11:48:55.538Z","time spent":"526.762322ms","remote":"127.0.0.1:44004","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":176,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-public/default\" mod_revision:360 > success:<request_put:<key:\"/registry/serviceaccounts/kube-public/default\" value_size:124 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-public/default\" > >"}
	{"level":"info","ts":"2023-07-31T11:48:56.739Z","caller":"traceutil/trace.go:171","msg":"trace[1881281638] transaction","detail":"{read_only:false; response_revision:407; number_of_response:1; }","duration":"109.841601ms","start":"2023-07-31T11:48:56.629Z","end":"2023-07-31T11:48:56.739Z","steps":["trace[1881281638] 'process raft request'  (duration: 85.944942ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-31T11:48:56.739Z","caller":"traceutil/trace.go:171","msg":"trace[1564807696] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"110.163412ms","start":"2023-07-31T11:48:56.629Z","end":"2023-07-31T11:48:56.739Z","steps":["trace[1564807696] 'process raft request'  (duration: 83.568181ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-31T11:48:56.791Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.201929ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-31T11:48:56.791Z","caller":"traceutil/trace.go:171","msg":"trace[709194085] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:409; }","duration":"129.2752ms","start":"2023-07-31T11:48:56.661Z","end":"2023-07-31T11:48:56.791Z","steps":["trace[709194085] 'agreement among raft nodes before linearized reading'  (duration: 79.635581ms)","trace[709194085] 'range keys from in-memory index tree'  (duration: 49.550537ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-31T11:48:56.791Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.464508ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/\" range_end:\"/registry/serviceaccounts/kube-system0\" ","response":"range_response_count:36 size:7511"}
	{"level":"info","ts":"2023-07-31T11:48:56.791Z","caller":"traceutil/trace.go:171","msg":"trace[1096180054] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/; range_end:/registry/serviceaccounts/kube-system0; response_count:36; response_revision:409; }","duration":"129.492027ms","start":"2023-07-31T11:48:56.661Z","end":"2023-07-31T11:48:56.791Z","steps":["trace[1096180054] 'agreement among raft nodes before linearized reading'  (duration: 79.584364ms)","trace[1096180054] 'range keys from in-memory index tree'  (duration: 49.742634ms)"],"step_count":2}
	{"level":"info","ts":"2023-07-31T11:48:57.690Z","caller":"traceutil/trace.go:171","msg":"trace[1267998135] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"122.83002ms","start":"2023-07-31T11:48:57.568Z","end":"2023-07-31T11:48:57.690Z","steps":["trace[1267998135] 'process raft request'  (duration: 122.309924ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-31T11:48:57.722Z","caller":"traceutil/trace.go:171","msg":"trace[1739030658] linearizableReadLoop","detail":"{readStateIndex:445; appliedIndex:444; }","duration":"100.067772ms","start":"2023-07-31T11:48:57.622Z","end":"2023-07-31T11:48:57.722Z","steps":["trace[1739030658] 'read index received'  (duration: 69.474143ms)","trace[1739030658] 'applied index is now lower than readState.Index'  (duration: 30.592989ms)"],"step_count":2}
	{"level":"info","ts":"2023-07-31T11:48:57.722Z","caller":"traceutil/trace.go:171","msg":"trace[292122156] transaction","detail":"{read_only:false; response_revision:433; number_of_response:1; }","duration":"100.861261ms","start":"2023-07-31T11:48:57.621Z","end":"2023-07-31T11:48:57.722Z","steps":["trace[292122156] 'process raft request'  (duration: 100.403532ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-31T11:48:57.797Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.515833ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:400"}
	{"level":"info","ts":"2023-07-31T11:48:57.797Z","caller":"traceutil/trace.go:171","msg":"trace[769665796] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:435; }","duration":"200.661991ms","start":"2023-07-31T11:48:57.596Z","end":"2023-07-31T11:48:57.797Z","steps":["trace[769665796] 'agreement among raft nodes before linearized reading'  (duration: 169.251333ms)","trace[769665796] 'range keys from in-memory index tree'  (duration: 31.198212ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-31T11:48:57.797Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"252.691646ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2023-07-31T11:48:57.804Z","caller":"traceutil/trace.go:171","msg":"trace[930025630] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:435; }","duration":"259.721586ms","start":"2023-07-31T11:48:57.544Z","end":"2023-07-31T11:48:57.804Z","steps":["trace[930025630] 'agreement among raft nodes before linearized reading'  (duration: 221.033924ms)","trace[930025630] 'range keys from in-memory index tree'  (duration: 31.615868ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-31T11:48:57.803Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.698592ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-31T11:48:57.807Z","caller":"traceutil/trace.go:171","msg":"trace[1097658037] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:0; response_revision:435; }","duration":"130.931914ms","start":"2023-07-31T11:48:57.676Z","end":"2023-07-31T11:48:57.807Z","steps":["trace[1097658037] 'agreement among raft nodes before linearized reading'  (duration: 89.587985ms)","trace[1097658037] 'range keys from in-memory index tree'  (duration: 38.09537ms)"],"step_count":2}
	
	* 
	* ==> gcp-auth [601bfea0f401dedbc3afc5f745795814bbd617e552cc2587e981133998f0ab2c] <==
	* 2023/07/31 11:50:07 GCP Auth Webhook started!
	2023/07/31 11:50:34 Ready to marshal response ...
	2023/07/31 11:50:34 Ready to write response ...
	2023/07/31 11:50:40 Ready to marshal response ...
	2023/07/31 11:50:40 Ready to write response ...
	2023/07/31 11:50:48 Ready to marshal response ...
	2023/07/31 11:50:48 Ready to write response ...
	2023/07/31 11:50:48 Ready to marshal response ...
	2023/07/31 11:50:48 Ready to write response ...
	2023/07/31 11:50:48 Ready to marshal response ...
	2023/07/31 11:50:48 Ready to write response ...
	2023/07/31 11:51:20 Ready to marshal response ...
	2023/07/31 11:51:20 Ready to write response ...
	2023/07/31 11:51:44 Ready to marshal response ...
	2023/07/31 11:51:44 Ready to write response ...
	2023/07/31 11:53:01 Ready to marshal response ...
	2023/07/31 11:53:01 Ready to write response ...
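	Each "Ready to marshal response ... / Ready to write response ..." pair above is one admission review served by the gcp-auth mutating webhook, the component that mounts GCP credentials into new pods unless they carry the gcp-auth-skip-secret label. A minimal sketch of that request/response shape, assuming k8s.io/api/admission/v1; this is a generic allow-all skeleton, not minikube's actual gcp-auth code, and the handler path, port, and cert paths are placeholders:

	package main

	import (
		"encoding/json"
		"log"
		"net/http"

		admissionv1 "k8s.io/api/admission/v1"
	)

	// serve decodes an AdmissionReview, attaches an allow-all response, and
	// writes the review back; a real mutating webhook would also attach a JSONPatch.
	func serve(w http.ResponseWriter, r *http.Request) {
		var review admissionv1.AdmissionReview
		if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
			http.Error(w, "malformed admission review", http.StatusBadRequest)
			return
		}
		review.Response = &admissionv1.AdmissionResponse{
			UID:     review.Request.UID, // the response must echo the request UID
			Allowed: true,
		}
		log.Println("Ready to marshal response ...")
		out, err := json.Marshal(review)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		log.Println("Ready to write response ...")
		w.Header().Set("Content-Type", "application/json")
		w.Write(out)
	}

	func main() {
		http.HandleFunc("/mutate", serve)
		// admission webhooks must be served over TLS; cert paths are placeholders
		log.Fatal(http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil))
	}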
	
	* 
	* ==> kernel <==
	*  11:53:27 up 19:35,  0 users,  load average: 0.96, 2.45, 3.51
	Linux addons-708039 5.15.0-1040-aws #45~20.04.1-Ubuntu SMP Tue Jul 11 19:11:12 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [c3610f6ff422cc05327368da790fc790be1989ef3bd42714171dc1fd2a429b0d] <==
	* I0731 11:51:26.778775       1 main.go:227] handling current node
	I0731 11:51:36.784910       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:51:36.784937       1 main.go:227] handling current node
	I0731 11:51:46.789514       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:51:46.789542       1 main.go:227] handling current node
	I0731 11:51:56.800339       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:51:56.800368       1 main.go:227] handling current node
	I0731 11:52:06.805819       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:52:06.805854       1 main.go:227] handling current node
	I0731 11:52:16.818913       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:52:16.818948       1 main.go:227] handling current node
	I0731 11:52:26.823108       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:52:26.823139       1 main.go:227] handling current node
	I0731 11:52:36.835625       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:52:36.835655       1 main.go:227] handling current node
	I0731 11:52:46.850531       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:52:46.850665       1 main.go:227] handling current node
	I0731 11:52:56.854625       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:52:56.854656       1 main.go:227] handling current node
	I0731 11:53:06.863894       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:53:06.863922       1 main.go:227] handling current node
	I0731 11:53:16.876409       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:53:16.876437       1 main.go:227] handling current node
	I0731 11:53:26.885019       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:53:26.885049       1 main.go:227] handling current node
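	The entries above arrive at ten-second intervals, kindnet's periodic node-sync loop; on this single-node cluster each pass only ever handles the current node.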
	
	* 
	* ==> kube-apiserver [5efa8e89d73cbfcfde216878ceb7de634ff2daaf44de77b5f9de841c9c308c0a] <==
	* I0731 11:50:50.293414       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0731 11:51:31.755755       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0731 11:51:50.276267       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0731 11:51:50.276311       1 handler_proxy.go:100] no RequestInfo found in the context
	E0731 11:51:50.276360       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 11:51:50.276401       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0731 11:52:01.989338       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 11:52:01.989406       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 11:52:02.003071       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 11:52:02.004186       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 11:52:02.035833       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 11:52:02.035912       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 11:52:02.040396       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 11:52:02.040440       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 11:52:02.073588       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 11:52:02.073728       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 11:52:02.091825       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 11:52:02.091888       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 11:52:02.102253       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 11:52:02.102371       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0731 11:52:03.041295       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0731 11:52:03.102344       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0731 11:52:03.126972       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0731 11:53:02.124295       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.101.134.231]
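	The failure at 11:51:50 is the API aggregator probing v1beta1.metrics.k8s.io after its backing Service disappeared ("service metrics-server not found"), presumably because the metrics-server addon was torn down mid-test; note that the aggregator backs off with a rate-limited requeue rather than failing hard. The repeated snapshot.storage.k8s.io GroupVersion lines and the 11:52:03 "Terminating all watchers" lines mark the volume-snapshot CRDs being updated and then removed.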
	
	* 
	* ==> kube-controller-manager [6dd3f6ea0d4417c7440a90705a3232c21fec45982c1d929c5081a5a1626d7477] <==
	* I0731 11:52:23.105129       1 shared_informer.go:318] Caches are synced for resource quota
	I0731 11:52:23.577758       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0731 11:52:23.577928       1 shared_informer.go:318] Caches are synced for garbage collector
	W0731 11:52:23.619166       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 11:52:23.619198       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 11:52:24.210989       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 11:52:24.211029       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 11:52:26.272433       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 11:52:26.272469       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 11:52:32.687661       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 11:52:32.687771       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 11:52:41.801397       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 11:52:41.801431       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 11:52:46.063218       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 11:52:46.063252       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0731 11:53:01.874348       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0731 11:53:01.906937       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-87fk7"
	W0731 11:53:11.043341       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 11:53:11.043379       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 11:53:13.417683       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 11:53:13.417721       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 11:53:14.426454       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 11:53:14.426486       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0731 11:53:19.004533       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0731 11:53:19.015346       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	
	* 
	* ==> kube-proxy [cf042d31b74f320ae0aed8b638b811bfc0533e8ee75d6b9dce929ddb47e4d03f] <==
	* I0731 11:48:58.972754       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0731 11:48:58.973072       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0731 11:48:58.973144       1 server_others.go:554] "Using iptables proxy"
	I0731 11:48:59.056661       1 server_others.go:192] "Using iptables Proxier"
	I0731 11:48:59.056712       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0731 11:48:59.056721       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0731 11:48:59.056736       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0731 11:48:59.056804       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 11:48:59.057359       1 server.go:658] "Version info" version="v1.27.3"
	I0731 11:48:59.057383       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 11:48:59.060164       1 config.go:188] "Starting service config controller"
	I0731 11:48:59.060183       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0731 11:48:59.060260       1 config.go:97] "Starting endpoint slice config controller"
	I0731 11:48:59.060272       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0731 11:48:59.096228       1 config.go:315] "Starting node config controller"
	I0731 11:48:59.096259       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0731 11:48:59.160811       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0731 11:48:59.160993       1 shared_informer.go:318] Caches are synced for service config
	I0731 11:48:59.199477       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [db6d39e9c3578e3889a634b230793453cbe8b5d150515e02a0e6d6bf996bd713] <==
	* W0731 11:48:37.956220       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 11:48:37.956276       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 11:48:37.956386       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 11:48:37.956435       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 11:48:37.956531       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 11:48:37.956594       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 11:48:37.956724       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 11:48:37.956931       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 11:48:37.956792       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 11:48:37.957477       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 11:48:37.957549       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 11:48:37.957632       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 11:48:37.956903       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 11:48:37.957725       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 11:48:38.825983       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 11:48:38.826023       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 11:48:38.830723       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 11:48:38.830816       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 11:48:38.917757       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 11:48:38.917876       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 11:48:38.969987       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 11:48:38.970099       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 11:48:39.055558       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 11:48:39.055674       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0731 11:48:41.824900       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jul 31 11:53:19 addons-708039 kubelet[1363]: I0731 11:53:19.234309    1363 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=43505d32-5612-4f39-b8ff-96b374aace23 path="/var/lib/kubelet/pods/43505d32-5612-4f39-b8ff-96b374aace23/volumes"
	Jul 31 11:53:19 addons-708039 kubelet[1363]: I0731 11:53:19.234775    1363 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=a4d2ea21-8086-4876-883f-7588c85bce5c path="/var/lib/kubelet/pods/a4d2ea21-8086-4876-883f-7588c85bce5c/volumes"
	Jul 31 11:53:19 addons-708039 kubelet[1363]: I0731 11:53:19.235163    1363 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=cc53229c-b247-41d9-a2d0-b333e7a67085 path="/var/lib/kubelet/pods/cc53229c-b247-41d9-a2d0-b333e7a67085/volumes"
	Jul 31 11:53:19 addons-708039 kubelet[1363]: I0731 11:53:19.286020    1363 scope.go:115] "RemoveContainer" containerID="3890d611871a038e37cce416867eb978dbdbf23fc831ec3bb4db942bf4b58953"
	Jul 31 11:53:19 addons-708039 kubelet[1363]: I0731 11:53:19.286362    1363 scope.go:115] "RemoveContainer" containerID="c297304297542e9b55c50efa672670952ebff1755830dfde00289197a74f5d7d"
	Jul 31 11:53:19 addons-708039 kubelet[1363]: E0731 11:53:19.286654    1363 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-87fk7_default(660e5a85-beca-462a-bbde-cefe631c1857)\"" pod="default/hello-world-app-65bdb79f98-87fk7" podUID=660e5a85-beca-462a-bbde-cefe631c1857
	Jul 31 11:53:20 addons-708039 kubelet[1363]: W0731 11:53:20.173172    1363 container.go:586] Failed to update stats for container "/docker/a6cdce0fdc3c000279e8b01a8266f6dbcc4b179ce28f55fb9d65102b8769e38b/crio-9b29040160697702756e7d9a2392d866da49cc80fd1aadf2a4c54a2cd15c45ac": unable to determine device info for dir: /var/lib/containers/storage/overlay/0a05123122f2d6418388a2339f4a89e748a41089f4d4d7d74edcce15a6aa6f19/diff: stat failed on /var/lib/containers/storage/overlay/0a05123122f2d6418388a2339f4a89e748a41089f4d4d7d74edcce15a6aa6f19/diff with error: no such file or directory, continuing to push stats
	Jul 31 11:53:20 addons-708039 kubelet[1363]: I0731 11:53:20.291713    1363 scope.go:115] "RemoveContainer" containerID="3234e9ca2d60bcea9b1d94b7bb7722d0953227d80e5764bf06b23b218224b47f"
	Jul 31 11:53:20 addons-708039 kubelet[1363]: I0731 11:53:20.313236    1363 scope.go:115] "RemoveContainer" containerID="3234e9ca2d60bcea9b1d94b7bb7722d0953227d80e5764bf06b23b218224b47f"
	Jul 31 11:53:20 addons-708039 kubelet[1363]: E0731 11:53:20.313615    1363 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3234e9ca2d60bcea9b1d94b7bb7722d0953227d80e5764bf06b23b218224b47f\": container with ID starting with 3234e9ca2d60bcea9b1d94b7bb7722d0953227d80e5764bf06b23b218224b47f not found: ID does not exist" containerID="3234e9ca2d60bcea9b1d94b7bb7722d0953227d80e5764bf06b23b218224b47f"
	Jul 31 11:53:20 addons-708039 kubelet[1363]: I0731 11:53:20.313656    1363 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:3234e9ca2d60bcea9b1d94b7bb7722d0953227d80e5764bf06b23b218224b47f} err="failed to get container status \"3234e9ca2d60bcea9b1d94b7bb7722d0953227d80e5764bf06b23b218224b47f\": rpc error: code = NotFound desc = could not find container \"3234e9ca2d60bcea9b1d94b7bb7722d0953227d80e5764bf06b23b218224b47f\": container with ID starting with 3234e9ca2d60bcea9b1d94b7bb7722d0953227d80e5764bf06b23b218224b47f not found: ID does not exist"
	Jul 31 11:53:20 addons-708039 kubelet[1363]: I0731 11:53:20.355386    1363 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dccc70d3-3255-4a53-be7e-05763ce365f4-webhook-cert\") pod \"dccc70d3-3255-4a53-be7e-05763ce365f4\" (UID: \"dccc70d3-3255-4a53-be7e-05763ce365f4\") "
	Jul 31 11:53:20 addons-708039 kubelet[1363]: I0731 11:53:20.355476    1363 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppjlj\" (UniqueName: \"kubernetes.io/projected/dccc70d3-3255-4a53-be7e-05763ce365f4-kube-api-access-ppjlj\") pod \"dccc70d3-3255-4a53-be7e-05763ce365f4\" (UID: \"dccc70d3-3255-4a53-be7e-05763ce365f4\") "
	Jul 31 11:53:20 addons-708039 kubelet[1363]: I0731 11:53:20.358409    1363 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dccc70d3-3255-4a53-be7e-05763ce365f4-kube-api-access-ppjlj" (OuterVolumeSpecName: "kube-api-access-ppjlj") pod "dccc70d3-3255-4a53-be7e-05763ce365f4" (UID: "dccc70d3-3255-4a53-be7e-05763ce365f4"). InnerVolumeSpecName "kube-api-access-ppjlj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 11:53:20 addons-708039 kubelet[1363]: I0731 11:53:20.362011    1363 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dccc70d3-3255-4a53-be7e-05763ce365f4-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "dccc70d3-3255-4a53-be7e-05763ce365f4" (UID: "dccc70d3-3255-4a53-be7e-05763ce365f4"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 31 11:53:20 addons-708039 kubelet[1363]: I0731 11:53:20.456265    1363 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ppjlj\" (UniqueName: \"kubernetes.io/projected/dccc70d3-3255-4a53-be7e-05763ce365f4-kube-api-access-ppjlj\") on node \"addons-708039\" DevicePath \"\""
	Jul 31 11:53:20 addons-708039 kubelet[1363]: I0731 11:53:20.456306    1363 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dccc70d3-3255-4a53-be7e-05763ce365f4-webhook-cert\") on node \"addons-708039\" DevicePath \"\""
	Jul 31 11:53:20 addons-708039 kubelet[1363]: W0731 11:53:20.964547    1363 container.go:586] Failed to update stats for container "/crio-30f7352baaa7dd72fa7491e477ac9166d83d907266c1c853e3a102d68122e81b": unable to determine device info for dir: /var/lib/containers/storage/overlay/d85f607339c0f78d1584da6d267f890ae777172d7244f474f0bd4c00b189d03a/diff: stat failed on /var/lib/containers/storage/overlay/d85f607339c0f78d1584da6d267f890ae777172d7244f474f0bd4c00b189d03a/diff with error: no such file or directory, continuing to push stats
	Jul 31 11:53:21 addons-708039 kubelet[1363]: I0731 11:53:21.233950    1363 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=dccc70d3-3255-4a53-be7e-05763ce365f4 path="/var/lib/kubelet/pods/dccc70d3-3255-4a53-be7e-05763ce365f4/volumes"
	Jul 31 11:53:22 addons-708039 kubelet[1363]: W0731 11:53:22.066935    1363 container.go:586] Failed to update stats for container "/docker/a6cdce0fdc3c000279e8b01a8266f6dbcc4b179ce28f55fb9d65102b8769e38b/crio-5de386e5daf4aae1abba2ab4c36d220397541291664df1081f9f9d4bd488f8b9": unable to determine device info for dir: /var/lib/containers/storage/overlay/e113829143ddcbc507b66f92729ecbbbb1c090781680085352d8b51feeb72437/diff: stat failed on /var/lib/containers/storage/overlay/e113829143ddcbc507b66f92729ecbbbb1c090781680085352d8b51feeb72437/diff with error: no such file or directory, continuing to push stats
	Jul 31 11:53:27 addons-708039 kubelet[1363]: E0731 11:53:27.517132    1363 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d85f607339c0f78d1584da6d267f890ae777172d7244f474f0bd4c00b189d03a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d85f607339c0f78d1584da6d267f890ae777172d7244f474f0bd4c00b189d03a/diff: no such file or directory, extraDiskErr: <nil>
	Jul 31 11:53:27 addons-708039 kubelet[1363]: E0731 11:53:27.656877    1363 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/24f1bc6a08fa9dc2ec21db2c132d869fc558ef747c39163fb0443766f466d7a2/diff" to get inode usage: stat /var/lib/containers/storage/overlay/24f1bc6a08fa9dc2ec21db2c132d869fc558ef747c39163fb0443766f466d7a2/diff: no such file or directory, extraDiskErr: <nil>
	Jul 31 11:53:27 addons-708039 kubelet[1363]: E0731 11:53:27.790311    1363 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/212ac9a2b30c7eea5eac5b37888183c873dc919abe1354bfa52d54400d9406fb/diff" to get inode usage: stat /var/lib/containers/storage/overlay/212ac9a2b30c7eea5eac5b37888183c873dc919abe1354bfa52d54400d9406fb/diff: no such file or directory, extraDiskErr: <nil>
	Jul 31 11:53:27 addons-708039 kubelet[1363]: E0731 11:53:27.865264    1363 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2a0badef58e7aef12f167d517213dfb24617452f81e019cca670853886721cc3/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2a0badef58e7aef12f167d517213dfb24617452f81e019cca670853886721cc3/diff: no such file or directory, extraDiskErr: <nil>
	Jul 31 11:53:27 addons-708039 kubelet[1363]: E0731 11:53:27.882939    1363 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/167c22df0c38f49e74ff70d33264c3535e5cd2c21e4e65af4743db9739dac381/diff" to get inode usage: stat /var/lib/containers/storage/overlay/167c22df0c38f49e74ff70d33264c3535e5cd2c21e4e65af4743db9739dac381/diff: no such file or directory, extraDiskErr: <nil>
	
	* 
	* ==> storage-provisioner [7a96193592dc00bd2e075f7575a92b96ac8990b2e22075e693a65b0de9c1c02e] <==
	* I0731 11:49:28.043695       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 11:49:28.059335       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 11:49:28.059449       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 11:49:28.069628       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 11:49:28.070226       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-708039_e3e819a5-4a84-4fdf-8049-5f9e98649c74!
	I0731 11:49:28.070990       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"efbdd839-a533-48a3-9141-7f22476ff918", APIVersion:"v1", ResourceVersion:"843", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-708039_e3e819a5-4a84-4fdf-8049-5f9e98649c74 became leader
	I0731 11:49:28.171350       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-708039_e3e819a5-4a84-4fdf-8049-5f9e98649c74!
	

-- /stdout --
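The kube-controller-manager section above is dominated by one repeating pair of lines: a reflector fails to list *v1.PartialObjectMetadata and retries, with the attempts spaced progressively further apart (11:52:23, :24, :26, :32, :41). As a purely illustrative sketch of that list-then-back-off loop, a minimal Go version follows; it is not client-go's actual reflector, and retryList with its backoff constants is a hypothetical stand-in.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryList keeps calling listFn until it succeeds, doubling the wait
// between attempts up to maxBackoff -- the same shape of behaviour the
// reflector lines above show while the API server cannot serve the
// resource. listFn and both durations are illustrative only.
func retryList(listFn func() error, initial, maxBackoff time.Duration) {
	backoff := initial
	for {
		err := listFn()
		if err == nil {
			return // a real reflector would start its watch here
		}
		fmt.Printf("failed to list: %v (retrying in %s)\n", err, backoff)
		time.Sleep(backoff)
		if backoff *= 2; backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}

func main() {
	attempts := 0
	retryList(func() error {
		if attempts++; attempts < 4 {
			return errors.New("the server could not find the requested resource")
		}
		return nil
	}, time.Second, 30*time.Second)
}

The backoff is visible in the timestamps: the controller never gives up, it just polls less aggressively, which is why the same two lines recur through the whole post-mortem window.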
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-708039 -n addons-708039
helpers_test.go:261: (dbg) Run:  kubectl --context addons-708039 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (169.05s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (179.59s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-604717 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E0731 12:00:51.985186  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-604717 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.850199258s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-604717 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-604717 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2db28749-8ada-4043-90a8-56aa602c0337] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2db28749-8ada-4043-90a8-56aa602c0337] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.01456565s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-604717 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0731 12:02:42.132843  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
E0731 12:02:42.138822  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
E0731 12:02:42.149181  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
E0731 12:02:42.169594  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
E0731 12:02:42.210077  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
E0731 12:02:42.290431  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
E0731 12:02:42.450753  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
E0731 12:02:42.771321  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
E0731 12:02:43.412251  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
E0731 12:02:44.693087  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
E0731 12:02:47.254033  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
E0731 12:02:52.374334  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
E0731 12:03:02.615483  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-604717 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.216356478s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
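The probe that times out here (and in TestAddons/parallel/Ingress above) is a curl run inside the node over ssh, with the Host header forced to nginx.example.com so the request matches the nginx Ingress rule; ssh reports status 28, which is curl's "operation timed out" exit code. For reproducing the same request outside the test harness, a minimal Go sketch follows, assuming it runs where 127.0.0.1 reaches the ingress controller; the timeout value is arbitrary:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// Equivalent of: curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		log.Fatal(err)
	}
	// In net/http the Host header is overridden via req.Host, not
	// req.Header; this is what steers nginx to the right Ingress rule.
	req.Host = "nginx.example.com"

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		log.Fatal(err) // the failing test never got past this point
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}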
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-604717 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-604717 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E0731 12:03:23.095747  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.021049557s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
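The nslookup step uses the node IP 192.168.49.2 directly as the DNS server, which is the endpoint the ingress-dns addon is supposed to answer on; "connection timed out; no servers could be reached" means nothing responded on port 53. A sketch of the same lookup in Go, pinning the resolver to that server (the timeout values are arbitrary):

package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	// Mirror `nslookup hello-john.test 192.168.49.2`: route every
	// query through 192.168.49.2:53 instead of the system resolver.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "192.168.49.2:53")
		},
	}

	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()

	addrs, err := r.LookupHost(ctx, "hello-john.test")
	if err != nil {
		log.Fatal(err) // this is the path the failing test hit
	}
	fmt.Println(addrs)
}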
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-604717 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-604717 addons disable ingress-dns --alsologtostderr -v=1: (1.305488341s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-604717 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-604717 addons disable ingress --alsologtostderr -v=1: (7.582850229s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-604717
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-604717:

-- stdout --
	[
	    {
	        "Id": "3f7e8312a09a74dfb0f2c7f4bdb7e75688c15bc8d3760b7ba13f58bb3b7dd8bf",
	        "Created": "2023-07-31T11:59:06.193895975Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 880319,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-31T11:59:06.502591881Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/3f7e8312a09a74dfb0f2c7f4bdb7e75688c15bc8d3760b7ba13f58bb3b7dd8bf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3f7e8312a09a74dfb0f2c7f4bdb7e75688c15bc8d3760b7ba13f58bb3b7dd8bf/hostname",
	        "HostsPath": "/var/lib/docker/containers/3f7e8312a09a74dfb0f2c7f4bdb7e75688c15bc8d3760b7ba13f58bb3b7dd8bf/hosts",
	        "LogPath": "/var/lib/docker/containers/3f7e8312a09a74dfb0f2c7f4bdb7e75688c15bc8d3760b7ba13f58bb3b7dd8bf/3f7e8312a09a74dfb0f2c7f4bdb7e75688c15bc8d3760b7ba13f58bb3b7dd8bf-json.log",
	        "Name": "/ingress-addon-legacy-604717",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-604717:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-604717",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0cf7f786fecf0c8a6dacc6f41e0a5add476b52a3a182b77855c2e3a69e71df45-init/diff:/var/lib/docker/overlay2/ea390dfb8f8baaae26b2c19880bf5069405274e04629daebd3f048abbe32d27b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0cf7f786fecf0c8a6dacc6f41e0a5add476b52a3a182b77855c2e3a69e71df45/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0cf7f786fecf0c8a6dacc6f41e0a5add476b52a3a182b77855c2e3a69e71df45/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0cf7f786fecf0c8a6dacc6f41e0a5add476b52a3a182b77855c2e3a69e71df45/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-604717",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-604717/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-604717",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-604717",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-604717",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "97e9b3efb4526ea19bff3ac02224d598298c7370a2b10b62d89f76a34a52ba2b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35856"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35855"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35852"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35854"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35853"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/97e9b3efb452",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-604717": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3f7e8312a09a",
	                        "ingress-addon-legacy-604717"
	                    ],
	                    "NetworkID": "d59ed3b8852c2c0d6fd4bfa9d0e7bf80d354d5b39c0206dc99f141aace998e01",
	                    "EndpointID": "783f254481f2e469225e5f1ae27c8a0048868d52db67b74203d488efc0cf0f62",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
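One useful detail in the inspect output: minikube publishes each guest port on a random localhost port (22/tcp on 127.0.0.1:35856, 8443/tcp on 127.0.0.1:35853, and so on), which is how the harness reaches ssh and the API server. Below is a small Go sketch that extracts those bindings from saved `docker inspect` JSON; the struct covers only the fields used here, and inspect.json is a hypothetical file name:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// Just the slice of the `docker inspect` document we need.
type inspect []struct {
	Name            string `json:"Name"`
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	// e.g. docker inspect ingress-addon-legacy-604717 > inspect.json
	data, err := os.ReadFile("inspect.json")
	if err != nil {
		log.Fatal(err)
	}
	var containers inspect
	if err := json.Unmarshal(data, &containers); err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		for port, bindings := range c.NetworkSettings.Ports {
			for _, b := range bindings {
				// Prints e.g. /ingress-addon-legacy-604717 22/tcp -> 127.0.0.1:35856
				fmt.Printf("%s %s -> %s:%s\n", c.Name, port, b.HostIp, b.HostPort)
			}
		}
	}
}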
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-604717 -n ingress-addon-legacy-604717
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-604717 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-604717 logs -n 25: (1.444850465s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                 |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| start          | -p functional-063414                 | functional-063414           | jenkins | v1.31.1 | 31 Jul 23 11:58 UTC |                     |
	|                | --dry-run --memory                   |                             |         |         |                     |                     |
	|                | 250MB --alsologtostderr              |                             |         |         |                     |                     |
	|                | --driver=docker                      |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| start          | -p functional-063414                 | functional-063414           | jenkins | v1.31.1 | 31 Jul 23 11:58 UTC |                     |
	|                | --dry-run --alsologtostderr          |                             |         |         |                     |                     |
	|                | -v=1 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| service        | functional-063414 service            | functional-063414           | jenkins | v1.31.1 | 31 Jul 23 11:58 UTC | 31 Jul 23 11:58 UTC |
	|                | --namespace=default --https          |                             |         |         |                     |                     |
	|                | --url hello-node                     |                             |         |         |                     |                     |
	| service        | functional-063414                    | functional-063414           | jenkins | v1.31.1 | 31 Jul 23 11:58 UTC | 31 Jul 23 11:58 UTC |
	|                | service hello-node --url             |                             |         |         |                     |                     |
	|                | --format={{.IP}}                     |                             |         |         |                     |                     |
	| service        | functional-063414 service            | functional-063414           | jenkins | v1.31.1 | 31 Jul 23 11:58 UTC | 31 Jul 23 11:58 UTC |
	|                | hello-node --url                     |                             |         |         |                     |                     |
	| start          | -p functional-063414                 | functional-063414           | jenkins | v1.31.1 | 31 Jul 23 11:58 UTC |                     |
	|                | --dry-run --memory                   |                             |         |         |                     |                     |
	|                | 250MB --alsologtostderr              |                             |         |         |                     |                     |
	|                | --driver=docker                      |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| dashboard      | --url --port 36195                   | functional-063414           | jenkins | v1.31.1 | 31 Jul 23 11:58 UTC | 31 Jul 23 11:58 UTC |
	|                | -p functional-063414                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| update-context | functional-063414                    | functional-063414           | jenkins | v1.31.1 | 31 Jul 23 11:58 UTC | 31 Jul 23 11:58 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-063414                    | functional-063414           | jenkins | v1.31.1 | 31 Jul 23 11:58 UTC | 31 Jul 23 11:58 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-063414                    | functional-063414           | jenkins | v1.31.1 | 31 Jul 23 11:58 UTC | 31 Jul 23 11:58 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| image          | functional-063414                    | functional-063414           | jenkins | v1.31.1 | 31 Jul 23 11:58 UTC | 31 Jul 23 11:58 UTC |
	|                | image ls --format short              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-063414                    | functional-063414           | jenkins | v1.31.1 | 31 Jul 23 11:58 UTC | 31 Jul 23 11:58 UTC |
	|                | image ls --format yaml               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| ssh            | functional-063414 ssh pgrep          | functional-063414           | jenkins | v1.31.1 | 31 Jul 23 11:58 UTC |                     |
	|                | buildkitd                            |                             |         |         |                     |                     |
	| image          | functional-063414 image build -t     | functional-063414           | jenkins | v1.31.1 | 31 Jul 23 11:58 UTC | 31 Jul 23 11:58 UTC |
	|                | localhost/my-image:functional-063414 |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr     |                             |         |         |                     |                     |
	| image          | functional-063414 image ls           | functional-063414           | jenkins | v1.31.1 | 31 Jul 23 11:58 UTC | 31 Jul 23 11:58 UTC |
	| image          | functional-063414                    | functional-063414           | jenkins | v1.31.1 | 31 Jul 23 11:58 UTC | 31 Jul 23 11:58 UTC |
	|                | image ls --format json               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-063414                    | functional-063414           | jenkins | v1.31.1 | 31 Jul 23 11:58 UTC | 31 Jul 23 11:58 UTC |
	|                | image ls --format table              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| delete         | -p functional-063414                 | functional-063414           | jenkins | v1.31.1 | 31 Jul 23 11:58 UTC | 31 Jul 23 11:58 UTC |
	| start          | -p ingress-addon-legacy-604717       | ingress-addon-legacy-604717 | jenkins | v1.31.1 | 31 Jul 23 11:58 UTC | 31 Jul 23 12:00 UTC |
	|                | --kubernetes-version=v1.18.20        |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true            |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-604717          | ingress-addon-legacy-604717 | jenkins | v1.31.1 | 31 Jul 23 12:00 UTC | 31 Jul 23 12:00 UTC |
	|                | addons enable ingress                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-604717          | ingress-addon-legacy-604717 | jenkins | v1.31.1 | 31 Jul 23 12:00 UTC | 31 Jul 23 12:00 UTC |
	|                | addons enable ingress-dns            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-604717          | ingress-addon-legacy-604717 | jenkins | v1.31.1 | 31 Jul 23 12:01 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/        |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'         |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-604717 ip       | ingress-addon-legacy-604717 | jenkins | v1.31.1 | 31 Jul 23 12:03 UTC | 31 Jul 23 12:03 UTC |
	| addons         | ingress-addon-legacy-604717          | ingress-addon-legacy-604717 | jenkins | v1.31.1 | 31 Jul 23 12:03 UTC | 31 Jul 23 12:03 UTC |
	|                | addons disable ingress-dns           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-604717          | ingress-addon-legacy-604717 | jenkins | v1.31.1 | 31 Jul 23 12:03 UTC | 31 Jul 23 12:03 UTC |
	|                | addons disable ingress               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/31 11:58:49
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 11:58:49.736445  879858 out.go:296] Setting OutFile to fd 1 ...
	I0731 11:58:49.736651  879858 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:58:49.736677  879858 out.go:309] Setting ErrFile to fd 2...
	I0731 11:58:49.736695  879858 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:58:49.737021  879858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-847174/.minikube/bin
	I0731 11:58:49.737555  879858 out.go:303] Setting JSON to false
	I0731 11:58:49.738566  879858 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":70877,"bootTime":1690733853,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0731 11:58:49.738711  879858 start.go:138] virtualization:  
	I0731 11:58:49.740983  879858 out.go:177] * [ingress-addon-legacy-604717] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0731 11:58:49.742826  879858 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 11:58:49.744695  879858 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:58:49.742949  879858 notify.go:220] Checking for updates...
	I0731 11:58:49.749864  879858 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig
	I0731 11:58:49.751483  879858 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube
	I0731 11:58:49.753065  879858 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 11:58:49.754754  879858 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 11:58:49.756396  879858 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 11:58:49.784688  879858 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 11:58:49.784788  879858 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 11:58:49.872296  879858 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:36 SystemTime:2023-07-31 11:58:49.862011322 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 11:58:49.872428  879858 docker.go:294] overlay module found
	I0731 11:58:49.875363  879858 out.go:177] * Using the docker driver based on user configuration
	I0731 11:58:49.877180  879858 start.go:298] selected driver: docker
	I0731 11:58:49.877196  879858 start.go:898] validating driver "docker" against <nil>
	I0731 11:58:49.877217  879858 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 11:58:49.877827  879858 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 11:58:49.939289  879858 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:36 SystemTime:2023-07-31 11:58:49.929876759 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 11:58:49.939447  879858 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 11:58:49.939651  879858 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 11:58:49.941690  879858 out.go:177] * Using Docker driver with root privileges
	I0731 11:58:49.943447  879858 cni.go:84] Creating CNI manager for ""
	I0731 11:58:49.943464  879858 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 11:58:49.943473  879858 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 11:58:49.943494  879858 start_flags.go:319] config:
	{Name:ingress-addon-legacy-604717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-604717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 11:58:49.945300  879858 out.go:177] * Starting control plane node ingress-addon-legacy-604717 in cluster ingress-addon-legacy-604717
	I0731 11:58:49.946822  879858 cache.go:122] Beginning downloading kic base image for docker with crio
	I0731 11:58:49.948366  879858 out.go:177] * Pulling base image ...
	I0731 11:58:49.950006  879858 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0731 11:58:49.950087  879858 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0731 11:58:49.972042  879858 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0731 11:58:49.972065  879858 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0731 11:58:50.024553  879858 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0731 11:58:50.024583  879858 cache.go:57] Caching tarball of preloaded images
	I0731 11:58:50.024767  879858 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0731 11:58:50.026843  879858 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0731 11:58:50.029698  879858 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0731 11:58:50.156805  879858 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0731 11:58:58.318595  879858 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0731 11:58:58.318735  879858 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0731 11:58:59.468512  879858 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
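	The checksum step above can be reproduced by hand; a minimal sketch, using the md5 value embedded in the download URL earlier in this log:
	
	  md5sum /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	  # expected digest (from the ?checksum=md5:... query above): 8ddd7f37d9a9977fe856222993d36c3d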
	I0731 11:58:59.468883  879858 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/config.json ...
	I0731 11:58:59.468918  879858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/config.json: {Name:mk4b2f3b0a9c6c47786f23087e92ccc966ceee99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:58:59.469105  879858 cache.go:195] Successfully downloaded all kic artifacts
	I0731 11:58:59.469141  879858 start.go:365] acquiring machines lock for ingress-addon-legacy-604717: {Name:mk1caa52aa69596b68f91c4558b46198f1dc73f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:58:59.469206  879858 start.go:369] acquired machines lock for "ingress-addon-legacy-604717" in 50.945µs
	I0731 11:58:59.469233  879858 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-604717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-604717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 11:58:59.469303  879858 start.go:125] createHost starting for "" (driver="docker")
	I0731 11:58:59.471496  879858 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0731 11:58:59.471760  879858 start.go:159] libmachine.API.Create for "ingress-addon-legacy-604717" (driver="docker")
	I0731 11:58:59.471788  879858 client.go:168] LocalClient.Create starting
	I0731 11:58:59.471899  879858 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem
	I0731 11:58:59.471935  879858 main.go:141] libmachine: Decoding PEM data...
	I0731 11:58:59.471955  879858 main.go:141] libmachine: Parsing certificate...
	I0731 11:58:59.472020  879858 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem
	I0731 11:58:59.472044  879858 main.go:141] libmachine: Decoding PEM data...
	I0731 11:58:59.472060  879858 main.go:141] libmachine: Parsing certificate...
	I0731 11:58:59.472434  879858 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-604717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 11:58:59.489755  879858 cli_runner.go:211] docker network inspect ingress-addon-legacy-604717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 11:58:59.489851  879858 network_create.go:281] running [docker network inspect ingress-addon-legacy-604717] to gather additional debugging logs...
	I0731 11:58:59.489872  879858 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-604717
	W0731 11:58:59.507045  879858 cli_runner.go:211] docker network inspect ingress-addon-legacy-604717 returned with exit code 1
	I0731 11:58:59.507087  879858 network_create.go:284] error running [docker network inspect ingress-addon-legacy-604717]: docker network inspect ingress-addon-legacy-604717: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-604717 not found
	I0731 11:58:59.507103  879858 network_create.go:286] output of [docker network inspect ingress-addon-legacy-604717]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-604717 not found
	
	** /stderr **
	I0731 11:58:59.507168  879858 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 11:58:59.524686  879858 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40009a4970}
	I0731 11:58:59.524729  879858 network_create.go:123] attempt to create docker network ingress-addon-legacy-604717 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0731 11:58:59.524788  879858 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-604717 ingress-addon-legacy-604717
	I0731 11:58:59.597136  879858 network_create.go:107] docker network ingress-addon-legacy-604717 192.168.49.0/24 created
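	The freshly created network can be confirmed independently of minikube; a sketch using the name, subnet, and gateway recorded above:
	
	  docker network inspect ingress-addon-legacy-604717 \
	    --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	  # expected: 192.168.49.0/24 192.168.49.1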
	I0731 11:58:59.597169  879858 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-604717" container
	I0731 11:58:59.597247  879858 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 11:58:59.616379  879858 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-604717 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-604717 --label created_by.minikube.sigs.k8s.io=true
	I0731 11:58:59.634301  879858 oci.go:103] Successfully created a docker volume ingress-addon-legacy-604717
	I0731 11:58:59.634399  879858 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-604717-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-604717 --entrypoint /usr/bin/test -v ingress-addon-legacy-604717:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0731 11:59:01.139950  879858 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-604717-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-604717 --entrypoint /usr/bin/test -v ingress-addon-legacy-604717:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (1.505497289s)
	I0731 11:59:01.139999  879858 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-604717
	I0731 11:59:01.140020  879858 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0731 11:59:01.140040  879858 kic.go:190] Starting extracting preloaded images to volume ...
	I0731 11:59:01.140211  879858 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-604717:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0731 11:59:06.108572  879858 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-604717:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.968310013s)
	I0731 11:59:06.108609  879858 kic.go:199] duration metric: took 4.968564 seconds to extract preloaded images to volume
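	The extraction above reuses the kicbase image as a throwaway tar runner against the named volume. The result can be spot-checked the same way; a sketch (using /bin/ls as the entrypoint is an assumption about the kicbase image contents):
	
	  docker run --rm --entrypoint /bin/ls \
	    -v ingress-addon-legacy-604717:/var \
	    gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 \
	    /var/lib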
	W0731 11:59:06.108764  879858 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0731 11:59:06.108876  879858 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0731 11:59:06.178194  879858 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-604717 --name ingress-addon-legacy-604717 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-604717 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-604717 --network ingress-addon-legacy-604717 --ip 192.168.49.2 --volume ingress-addon-legacy-604717:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0731 11:59:06.513511  879858 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-604717 --format={{.State.Running}}
	I0731 11:59:06.540256  879858 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-604717 --format={{.State.Status}}
	I0731 11:59:06.567190  879858 cli_runner.go:164] Run: docker exec ingress-addon-legacy-604717 stat /var/lib/dpkg/alternatives/iptables
	I0731 11:59:06.684290  879858 oci.go:144] the created container "ingress-addon-legacy-604717" has a running status.
	I0731 11:59:06.684322  879858 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/ingress-addon-legacy-604717/id_rsa...
	I0731 11:59:07.011887  879858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/ingress-addon-legacy-604717/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0731 11:59:07.011984  879858 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16968-847174/.minikube/machines/ingress-addon-legacy-604717/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0731 11:59:07.042941  879858 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-604717 --format={{.State.Status}}
	I0731 11:59:07.067312  879858 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0731 11:59:07.067338  879858 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-604717 chown docker:docker /home/docker/.ssh/authorized_keys]
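	With the public key installed, the node is reachable over the host-published SSH port (35856 in this run, but allocated dynamically); a sketch of a manual connection:
	
	  PORT=$(docker port ingress-addon-legacy-604717 22/tcp | head -n1 | cut -d: -f2)
	  ssh -i /home/jenkins/minikube-integration/16968-847174/.minikube/machines/ingress-addon-legacy-604717/id_rsa \
	    -p "$PORT" docker@127.0.0.1 true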
	I0731 11:59:07.168821  879858 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-604717 --format={{.State.Status}}
	I0731 11:59:07.201448  879858 machine.go:88] provisioning docker machine ...
	I0731 11:59:07.201478  879858 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-604717"
	I0731 11:59:07.201549  879858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-604717
	I0731 11:59:07.231059  879858 main.go:141] libmachine: Using SSH client type: native
	I0731 11:59:07.231509  879858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 35856 <nil> <nil>}
	I0731 11:59:07.231528  879858 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-604717 && echo "ingress-addon-legacy-604717" | sudo tee /etc/hostname
	I0731 11:59:07.232925  879858 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0731 11:59:10.382040  879858 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-604717
	
	I0731 11:59:10.382135  879858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-604717
	I0731 11:59:10.401502  879858 main.go:141] libmachine: Using SSH client type: native
	I0731 11:59:10.401938  879858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 35856 <nil> <nil>}
	I0731 11:59:10.401962  879858 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-604717' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-604717/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-604717' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 11:59:10.537628  879858 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 11:59:10.537658  879858 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16968-847174/.minikube CaCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16968-847174/.minikube}
	I0731 11:59:10.537678  879858 ubuntu.go:177] setting up certificates
	I0731 11:59:10.537687  879858 provision.go:83] configureAuth start
	I0731 11:59:10.537748  879858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-604717
	I0731 11:59:10.557214  879858 provision.go:138] copyHostCerts
	I0731 11:59:10.557257  879858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem
	I0731 11:59:10.557290  879858 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem, removing ...
	I0731 11:59:10.557301  879858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem
	I0731 11:59:10.557385  879858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem (1082 bytes)
	I0731 11:59:10.557468  879858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem
	I0731 11:59:10.557485  879858 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem, removing ...
	I0731 11:59:10.557490  879858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem
	I0731 11:59:10.557514  879858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem (1123 bytes)
	I0731 11:59:10.557552  879858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem
	I0731 11:59:10.557567  879858 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem, removing ...
	I0731 11:59:10.557571  879858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem
	I0731 11:59:10.557595  879858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem (1679 bytes)
	I0731 11:59:10.557641  879858 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-604717 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-604717]
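	The SAN list baked into the generated server certificate can be inspected with openssl; a sketch:
	
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'
	  # should list 192.168.49.2, 127.0.0.1, localhost, minikube, ingress-addon-legacy-604717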
	I0731 11:59:11.350998  879858 provision.go:172] copyRemoteCerts
	I0731 11:59:11.351096  879858 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 11:59:11.351143  879858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-604717
	I0731 11:59:11.370126  879858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35856 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/ingress-addon-legacy-604717/id_rsa Username:docker}
	I0731 11:59:11.467050  879858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 11:59:11.467114  879858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 11:59:11.495748  879858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 11:59:11.495838  879858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 11:59:11.524261  879858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 11:59:11.524323  879858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0731 11:59:11.552776  879858 provision.go:86] duration metric: configureAuth took 1.015073888s
	I0731 11:59:11.552813  879858 ubuntu.go:193] setting minikube options for container-runtime
	I0731 11:59:11.553041  879858 config.go:182] Loaded profile config "ingress-addon-legacy-604717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0731 11:59:11.553156  879858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-604717
	I0731 11:59:11.571742  879858 main.go:141] libmachine: Using SSH client type: native
	I0731 11:59:11.572263  879858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 35856 <nil> <nil>}
	I0731 11:59:11.572289  879858 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 11:59:11.855065  879858 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 11:59:11.855085  879858 machine.go:91] provisioned docker machine in 4.653617468s
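	The option echoed back above is persisted in a sysconfig file that the crio unit reads on restart; a quick check from inside the node (a sketch):
	
	  cat /etc/sysconfig/crio.minikube
	  # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '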
	I0731 11:59:11.855094  879858 client.go:171] LocalClient.Create took 12.383300781s
	I0731 11:59:11.855106  879858 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-604717" took 12.383346351s
	I0731 11:59:11.855114  879858 start.go:300] post-start starting for "ingress-addon-legacy-604717" (driver="docker")
	I0731 11:59:11.855123  879858 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 11:59:11.855186  879858 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 11:59:11.855237  879858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-604717
	I0731 11:59:11.874284  879858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35856 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/ingress-addon-legacy-604717/id_rsa Username:docker}
	I0731 11:59:11.973214  879858 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 11:59:11.977888  879858 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 11:59:11.977924  879858 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 11:59:11.977936  879858 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 11:59:11.977942  879858 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0731 11:59:11.977951  879858 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-847174/.minikube/addons for local assets ...
	I0731 11:59:11.978021  879858 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-847174/.minikube/files for local assets ...
	I0731 11:59:11.978107  879858 filesync.go:149] local asset: /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem -> 8525502.pem in /etc/ssl/certs
	I0731 11:59:11.978119  879858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem -> /etc/ssl/certs/8525502.pem
	I0731 11:59:11.978235  879858 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 11:59:11.989077  879858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem --> /etc/ssl/certs/8525502.pem (1708 bytes)
	I0731 11:59:12.031732  879858 start.go:303] post-start completed in 176.601783ms
	I0731 11:59:12.032183  879858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-604717
	I0731 11:59:12.054409  879858 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/config.json ...
	I0731 11:59:12.054720  879858 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 11:59:12.054776  879858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-604717
	I0731 11:59:12.074402  879858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35856 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/ingress-addon-legacy-604717/id_rsa Username:docker}
	I0731 11:59:12.170555  879858 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 11:59:12.176577  879858 start.go:128] duration metric: createHost completed in 12.707258187s
	I0731 11:59:12.176601  879858 start.go:83] releasing machines lock for "ingress-addon-legacy-604717", held for 12.707380098s
	I0731 11:59:12.176676  879858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-604717
	I0731 11:59:12.194927  879858 ssh_runner.go:195] Run: cat /version.json
	I0731 11:59:12.194986  879858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-604717
	I0731 11:59:12.195053  879858 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 11:59:12.195131  879858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-604717
	I0731 11:59:12.224284  879858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35856 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/ingress-addon-legacy-604717/id_rsa Username:docker}
	I0731 11:59:12.233616  879858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35856 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/ingress-addon-legacy-604717/id_rsa Username:docker}
	I0731 11:59:12.449152  879858 ssh_runner.go:195] Run: systemctl --version
	I0731 11:59:12.455139  879858 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 11:59:12.607928  879858 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 11:59:12.613795  879858 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 11:59:12.638435  879858 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0731 11:59:12.638522  879858 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 11:59:12.676318  879858 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0731 11:59:12.676340  879858 start.go:466] detecting cgroup driver to use...
	I0731 11:59:12.676373  879858 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0731 11:59:12.676439  879858 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 11:59:12.696378  879858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 11:59:12.711021  879858 docker.go:196] disabling cri-docker service (if available) ...
	I0731 11:59:12.711129  879858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 11:59:12.727936  879858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 11:59:12.744997  879858 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 11:59:12.835684  879858 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 11:59:12.943408  879858 docker.go:212] disabling docker service ...
	I0731 11:59:12.943524  879858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 11:59:12.966294  879858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 11:59:12.980470  879858 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 11:59:13.082456  879858 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 11:59:13.181262  879858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 11:59:13.195144  879858 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 11:59:13.215819  879858 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 11:59:13.215893  879858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 11:59:13.227808  879858 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 11:59:13.227949  879858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 11:59:13.239887  879858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 11:59:13.251682  879858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 11:59:13.263722  879858 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 11:59:13.275210  879858 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 11:59:13.286430  879858 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 11:59:13.296770  879858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 11:59:13.389894  879858 ssh_runner.go:195] Run: sudo systemctl restart crio
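	After the sed edits above, the CRI-O drop-in should carry the pause image and cgroup settings before the restart takes effect; a sketch of verifying it in place:
	
	  grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.2"
	  # cgroup_manager = "cgroupfs"
	  # conmon_cgroup = "pod"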
	I0731 11:59:13.511463  879858 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 11:59:13.511574  879858 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 11:59:13.516438  879858 start.go:534] Will wait 60s for crictl version
	I0731 11:59:13.516551  879858 ssh_runner.go:195] Run: which crictl
	I0731 11:59:13.520865  879858 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 11:59:13.572663  879858 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0731 11:59:13.572796  879858 ssh_runner.go:195] Run: crio --version
	I0731 11:59:13.616891  879858 ssh_runner.go:195] Run: crio --version
	I0731 11:59:13.666209  879858 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0731 11:59:13.667813  879858 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-604717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 11:59:13.685228  879858 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0731 11:59:13.690001  879858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 11:59:13.703490  879858 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0731 11:59:13.703562  879858 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 11:59:13.760982  879858 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0731 11:59:13.761055  879858 ssh_runner.go:195] Run: which lz4
	I0731 11:59:13.765621  879858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0731 11:59:13.765774  879858 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 11:59:13.770107  879858 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 11:59:13.770144  879858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I0731 11:59:16.001141  879858 crio.go:444] Took 2.235413 seconds to copy over tarball
	I0731 11:59:16.001301  879858 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 11:59:18.704963  879858 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.70361696s)
	I0731 11:59:18.704995  879858 crio.go:451] Took 2.703777 seconds to extract the tarball
	I0731 11:59:18.705006  879858 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 11:59:18.951313  879858 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 11:59:18.993283  879858 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0731 11:59:18.993308  879858 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 11:59:18.993345  879858 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 11:59:18.993559  879858 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0731 11:59:18.993644  879858 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0731 11:59:18.993720  879858 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0731 11:59:18.993793  879858 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0731 11:59:18.993863  879858 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 11:59:18.993925  879858 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0731 11:59:18.994022  879858 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0731 11:59:18.994910  879858 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0731 11:59:18.995344  879858 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 11:59:18.995621  879858 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0731 11:59:18.995765  879858 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0731 11:59:18.995963  879858 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 11:59:18.996260  879858 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0731 11:59:18.996457  879858 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0731 11:59:18.996945  879858 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	W0731 11:59:19.427744  879858 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0731 11:59:19.427935  879858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W0731 11:59:19.442715  879858 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0731 11:59:19.443046  879858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W0731 11:59:19.455673  879858 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0731 11:59:19.455893  879858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W0731 11:59:19.456248  879858 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0731 11:59:19.456371  879858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W0731 11:59:19.459121  879858 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0731 11:59:19.459308  879858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W0731 11:59:19.467331  879858 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0731 11:59:19.467511  879858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0731 11:59:19.478692  879858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
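	The arch-mismatch warnings above come from comparing each image manifest's platform list against the arm64 host. That check can be reproduced by hand; a sketch, assuming a docker CLI with manifest support:
	
	  docker manifest inspect registry.k8s.io/kube-scheduler:v1.18.20 \
	    | grep '"architecture"'
	  # lists the architectures published for this tag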
	I0731 11:59:19.528781  879858 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0731 11:59:19.528829  879858 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0731 11:59:19.528882  879858 ssh_runner.go:195] Run: which crictl
	I0731 11:59:19.542681  879858 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0731 11:59:19.542727  879858 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0731 11:59:19.542777  879858 ssh_runner.go:195] Run: which crictl
	W0731 11:59:19.634423  879858 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 11:59:19.634589  879858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 11:59:19.662163  879858 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0731 11:59:19.662202  879858 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0731 11:59:19.662256  879858 ssh_runner.go:195] Run: which crictl
	I0731 11:59:19.678988  879858 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0731 11:59:19.679036  879858 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0731 11:59:19.679088  879858 ssh_runner.go:195] Run: which crictl
	I0731 11:59:19.679163  879858 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0731 11:59:19.679184  879858 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0731 11:59:19.679211  879858 ssh_runner.go:195] Run: which crictl
	I0731 11:59:19.679284  879858 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0731 11:59:19.679307  879858 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0731 11:59:19.679329  879858 ssh_runner.go:195] Run: which crictl
	I0731 11:59:19.679396  879858 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0731 11:59:19.679418  879858 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 11:59:19.679440  879858 ssh_runner.go:195] Run: which crictl
	I0731 11:59:19.679512  879858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0731 11:59:19.679574  879858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0731 11:59:19.853048  879858 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0731 11:59:19.853097  879858 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 11:59:19.853156  879858 ssh_runner.go:195] Run: which crictl
	I0731 11:59:19.853234  879858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0731 11:59:19.853305  879858 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0731 11:59:19.853384  879858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0731 11:59:19.853407  879858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0731 11:59:19.853472  879858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0731 11:59:19.853520  879858 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0731 11:59:19.853487  879858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 11:59:19.992896  879858 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0731 11:59:19.992983  879858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 11:59:19.993061  879858 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0731 11:59:19.993096  879858 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0731 11:59:19.993137  879858 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0731 11:59:19.993174  879858 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0731 11:59:20.060698  879858 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 11:59:20.060816  879858 cache_images.go:92] LoadImages completed in 1.067494813s
	W0731 11:59:20.060912  879858 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
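	The warning above is non-fatal: the expected per-image files were never written under cache/images/arm64, so the images will be pulled rather than side-loaded. A sketch of confirming what is (not) cached:
	
	  ls /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/ 2>/dev/null \
	    || echo 'no cached arm64 images'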
	I0731 11:59:20.061023  879858 ssh_runner.go:195] Run: crio config
	I0731 11:59:20.123200  879858 cni.go:84] Creating CNI manager for ""
	I0731 11:59:20.123222  879858 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 11:59:20.123234  879858 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0731 11:59:20.123252  879858 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-604717 NodeName:ingress-addon-legacy-604717 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 11:59:20.123403  879858 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-604717"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
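The rendered kubeadm config above pins the pod and service CIDRs and, per the template comment, effectively disables kubelet disk eviction by setting every evictionHard threshold to 0%. As a hedged spot-check (not part of this test run, and assuming a minikube binary on PATH), the file that ends up on the node can be read back once the profile is running:

    minikube ssh -p ingress-addon-legacy-604717 "sudo cat /var/tmp/minikube/kubeadm.yaml"
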
	I0731 11:59:20.123490  879858 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-604717 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-604717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
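The drop-in above first clears ExecStart and then redefines it, so the node's kubelet unit runs minikube's pinned v1.18.20 binary against the CRI-O socket instead of any distro default. A hedged way to confirm the unit the node actually loads (assuming the profile is still up):

    minikube ssh -p ingress-addon-legacy-604717 "sudo systemctl cat kubelet"
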
	I0731 11:59:20.123562  879858 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0731 11:59:20.135922  879858 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 11:59:20.136058  879858 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 11:59:20.147354  879858 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0731 11:59:20.171950  879858 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0731 11:59:20.194882  879858 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0731 11:59:20.216859  879858 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0731 11:59:20.221516  879858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
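The bash one-liner above is an idempotent /etc/hosts update: it strips any stale control-plane.minikube.internal entry, appends the current mapping, and copies the temp file back over /etc/hosts in one step, so repeated starts never duplicate the line. A minimal standalone sketch of the same pattern, run on the node with the same host name and IP:

    # Drop any old tab-separated entry, then re-append the current one.
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
      printf '192.168.49.2\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
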
	I0731 11:59:20.235431  879858 certs.go:56] Setting up /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717 for IP: 192.168.49.2
	I0731 11:59:20.235465  879858 certs.go:190] acquiring lock for shared ca certs: {Name:mk762e840a818dea6b5e9edfaa8822eb28411d63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:59:20.235641  879858 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16968-847174/.minikube/ca.key
	I0731 11:59:20.235693  879858 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16968-847174/.minikube/proxy-client-ca.key
	I0731 11:59:20.235743  879858 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.key
	I0731 11:59:20.235758  879858 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt with IP's: []
	I0731 11:59:20.409080  879858 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt ...
	I0731 11:59:20.409112  879858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: {Name:mk82db735fa1eab39cb030fcf9cd5937d69b90cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:59:20.409310  879858 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.key ...
	I0731 11:59:20.409322  879858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.key: {Name:mk5ad6bcf1fe5a64fd4c7035ecc8e3a8a0734c11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:59:20.409413  879858 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/apiserver.key.dd3b5fb2
	I0731 11:59:20.409429  879858 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0731 11:59:20.610912  879858 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/apiserver.crt.dd3b5fb2 ...
	I0731 11:59:20.610945  879858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/apiserver.crt.dd3b5fb2: {Name:mk6554222952690bbdd10c6e568a0952447672ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:59:20.611140  879858 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/apiserver.key.dd3b5fb2 ...
	I0731 11:59:20.611152  879858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/apiserver.key.dd3b5fb2: {Name:mk248430d5e65a6d118b647925096a8174e29d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:59:20.611237  879858 certs.go:337] copying /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/apiserver.crt
	I0731 11:59:20.611314  879858 certs.go:341] copying /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/apiserver.key
	I0731 11:59:20.611371  879858 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/proxy-client.key
	I0731 11:59:20.611382  879858 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/proxy-client.crt with IP's: []
	I0731 11:59:21.234463  879858 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/proxy-client.crt ...
	I0731 11:59:21.234494  879858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/proxy-client.crt: {Name:mkbf4a3e99ce7db37b9d22b328318dba24d5802f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:59:21.234683  879858 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/proxy-client.key ...
	I0731 11:59:21.234701  879858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/proxy-client.key: {Name:mk3bff14a9ff8dce84654aa88914d7a11d73a1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:59:21.234795  879858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 11:59:21.234816  879858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 11:59:21.234827  879858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 11:59:21.234842  879858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 11:59:21.234856  879858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 11:59:21.234874  879858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 11:59:21.234887  879858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 11:59:21.234905  879858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 11:59:21.234957  879858 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/852550.pem (1338 bytes)
	W0731 11:59:21.234999  879858 certs.go:433] ignoring /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/852550_empty.pem, impossibly tiny 0 bytes
	I0731 11:59:21.235008  879858 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 11:59:21.235035  879858 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem (1082 bytes)
	I0731 11:59:21.235062  879858 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem (1123 bytes)
	I0731 11:59:21.235089  879858 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem (1679 bytes)
	I0731 11:59:21.235140  879858 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem (1708 bytes)
	I0731 11:59:21.235174  879858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 11:59:21.235196  879858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/852550.pem -> /usr/share/ca-certificates/852550.pem
	I0731 11:59:21.235210  879858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem -> /usr/share/ca-certificates/8525502.pem
	I0731 11:59:21.235877  879858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0731 11:59:21.266341  879858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 11:59:21.295174  879858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 11:59:21.324022  879858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 11:59:21.353325  879858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 11:59:21.382559  879858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 11:59:21.412198  879858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 11:59:21.440778  879858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 11:59:21.469498  879858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 11:59:21.498348  879858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/certs/852550.pem --> /usr/share/ca-certificates/852550.pem (1338 bytes)
	I0731 11:59:21.527377  879858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem --> /usr/share/ca-certificates/8525502.pem (1708 bytes)
	I0731 11:59:21.556808  879858 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 11:59:21.578438  879858 ssh_runner.go:195] Run: openssl version
	I0731 11:59:21.585446  879858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8525502.pem && ln -fs /usr/share/ca-certificates/8525502.pem /etc/ssl/certs/8525502.pem"
	I0731 11:59:21.597115  879858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8525502.pem
	I0731 11:59:21.601835  879858 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 31 11:54 /usr/share/ca-certificates/8525502.pem
	I0731 11:59:21.601937  879858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8525502.pem
	I0731 11:59:21.610211  879858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8525502.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 11:59:21.621816  879858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 11:59:21.633410  879858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 11:59:21.637948  879858 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 31 11:48 /usr/share/ca-certificates/minikubeCA.pem
	I0731 11:59:21.638014  879858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 11:59:21.646774  879858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 11:59:21.658799  879858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/852550.pem && ln -fs /usr/share/ca-certificates/852550.pem /etc/ssl/certs/852550.pem"
	I0731 11:59:21.670631  879858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/852550.pem
	I0731 11:59:21.675520  879858 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 31 11:54 /usr/share/ca-certificates/852550.pem
	I0731 11:59:21.675623  879858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/852550.pem
	I0731 11:59:21.684266  879858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/852550.pem /etc/ssl/certs/51391683.0"
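The openssl/ln sequence above reproduces OpenSSL's hashed CA directory convention: each trusted certificate under /etc/ssl/certs needs a companion <subject-hash>.0 symlink, where the hash is what openssl x509 -hash prints (e.g. b5213941 for minikubeCA.pem above). A hedged sketch of the same convention for a single certificate:

    CERT=/etc/ssl/certs/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
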
	I0731 11:59:21.695861  879858 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0731 11:59:21.700476  879858 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0731 11:59:21.700527  879858 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-604717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-604717 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 11:59:21.700619  879858 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 11:59:21.700674  879858 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 11:59:21.744548  879858 cri.go:89] found id: ""
	I0731 11:59:21.744658  879858 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 11:59:21.755519  879858 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 11:59:21.766423  879858 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0731 11:59:21.766499  879858 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 11:59:21.777333  879858 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 11:59:21.777381  879858 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0731 11:59:21.833796  879858 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0731 11:59:21.834228  879858 kubeadm.go:322] [preflight] Running pre-flight checks
	I0731 11:59:21.893633  879858 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0731 11:59:21.893747  879858 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1040-aws
	I0731 11:59:21.893819  879858 kubeadm.go:322] OS: Linux
	I0731 11:59:21.893915  879858 kubeadm.go:322] CGROUPS_CPU: enabled
	I0731 11:59:21.894013  879858 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0731 11:59:21.894088  879858 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0731 11:59:21.894163  879858 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0731 11:59:21.894232  879858 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0731 11:59:21.894317  879858 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0731 11:59:21.992724  879858 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 11:59:21.992830  879858 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 11:59:21.992921  879858 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 11:59:22.247456  879858 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 11:59:22.249124  879858 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 11:59:22.249458  879858 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0731 11:59:22.344708  879858 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 11:59:22.347891  879858 out.go:204]   - Generating certificates and keys ...
	I0731 11:59:22.348078  879858 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0731 11:59:22.348226  879858 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0731 11:59:22.931755  879858 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 11:59:23.436728  879858 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0731 11:59:25.125580  879858 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0731 11:59:25.259391  879858 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0731 11:59:25.716072  879858 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0731 11:59:25.716275  879858 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-604717 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0731 11:59:26.452732  879858 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0731 11:59:26.453159  879858 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-604717 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0731 11:59:27.046377  879858 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 11:59:27.457995  879858 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 11:59:28.202576  879858 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0731 11:59:28.202927  879858 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 11:59:28.530894  879858 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 11:59:29.672567  879858 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 11:59:30.011243  879858 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 11:59:31.405262  879858 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 11:59:31.406139  879858 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 11:59:31.408276  879858 out.go:204]   - Booting up control plane ...
	I0731 11:59:31.408407  879858 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 11:59:31.416946  879858 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 11:59:31.420930  879858 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 11:59:31.421046  879858 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 11:59:31.421749  879858 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 11:59:42.924330  879858 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.502520 seconds
	I0731 11:59:42.924443  879858 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 11:59:42.940053  879858 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 11:59:43.458271  879858 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 11:59:43.458418  879858 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-604717 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0731 11:59:43.967463  879858 kubeadm.go:322] [bootstrap-token] Using token: pnb4h6.y2cq5fkmz5nksufi
	I0731 11:59:43.969522  879858 out.go:204]   - Configuring RBAC rules ...
	I0731 11:59:43.969644  879858 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 11:59:43.974744  879858 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 11:59:43.989445  879858 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 11:59:43.993379  879858 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 11:59:44.007988  879858 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 11:59:44.018181  879858 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 11:59:44.032609  879858 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 11:59:44.339613  879858 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0731 11:59:44.449750  879858 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0731 11:59:44.452006  879858 kubeadm.go:322] 
	I0731 11:59:44.452076  879858 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0731 11:59:44.452083  879858 kubeadm.go:322] 
	I0731 11:59:44.452180  879858 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0731 11:59:44.452187  879858 kubeadm.go:322] 
	I0731 11:59:44.452211  879858 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0731 11:59:44.452272  879858 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 11:59:44.452322  879858 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 11:59:44.452331  879858 kubeadm.go:322] 
	I0731 11:59:44.452379  879858 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0731 11:59:44.452451  879858 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 11:59:44.452539  879858 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 11:59:44.452548  879858 kubeadm.go:322] 
	I0731 11:59:44.452627  879858 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 11:59:44.452702  879858 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0731 11:59:44.452711  879858 kubeadm.go:322] 
	I0731 11:59:44.452789  879858 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token pnb4h6.y2cq5fkmz5nksufi \
	I0731 11:59:44.452894  879858 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:59797f47caa702c46c8e55349da2b7fcf9d45fa97f7025328f291444513c4181 \
	I0731 11:59:44.452920  879858 kubeadm.go:322]     --control-plane 
	I0731 11:59:44.452924  879858 kubeadm.go:322] 
	I0731 11:59:44.453006  879858 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0731 11:59:44.453012  879858 kubeadm.go:322] 
	I0731 11:59:44.453089  879858 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token pnb4h6.y2cq5fkmz5nksufi \
	I0731 11:59:44.453190  879858 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:59797f47caa702c46c8e55349da2b7fcf9d45fa97f7025328f291444513c4181 
	I0731 11:59:44.456550  879858 kubeadm.go:322] W0731 11:59:21.832870    1228 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0731 11:59:44.456770  879858 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1040-aws\n", err: exit status 1
	I0731 11:59:44.456882  879858 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 11:59:44.457002  879858 kubeadm.go:322] W0731 11:59:31.416542    1228 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0731 11:59:44.457124  879858 kubeadm.go:322] W0731 11:59:31.418061    1228 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0731 11:59:44.457147  879858 cni.go:84] Creating CNI manager for ""
	I0731 11:59:44.457158  879858 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 11:59:44.459138  879858 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 11:59:44.460722  879858 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 11:59:44.466084  879858 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0731 11:59:44.466104  879858 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0731 11:59:44.489642  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0731 11:59:44.915663  879858 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 11:59:44.915816  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:44.915923  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.1 minikube.k8s.io/commit=0b7b0f4114385a1c2b88c73e894c2289f44aee35 minikube.k8s.io/name=ingress-addon-legacy-604717 minikube.k8s.io/updated_at=2023_07_31T11_59_44_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:45.117984  879858 ops.go:34] apiserver oom_adj: -16
	I0731 11:59:45.118136  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:45.321749  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:45.948523  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:46.448668  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:46.947998  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:47.448562  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:47.948334  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:48.448413  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:48.948926  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:49.448225  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:49.948425  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:50.448399  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:50.947885  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:51.448456  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:51.948792  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:52.448257  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:52.948778  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:53.447957  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:53.948587  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:54.447903  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:54.948095  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:55.448560  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:55.948890  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:56.448486  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:56.947924  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:57.448517  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:57.947933  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:58.447942  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:58.947905  879858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:59:59.107734  879858 kubeadm.go:1081] duration metric: took 14.191979296s to wait for elevateKubeSystemPrivileges.
	I0731 11:59:59.107772  879858 kubeadm.go:406] StartCluster complete in 37.407240725s
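The burst of identical "kubectl get sa default" runs above is a poll loop: minikube retries roughly twice a second until the default ServiceAccount exists, which is the signal that the controller-manager has finished bootstrapping the namespace and workloads can be admitted. A hedged on-node equivalent of that loop:

    # Poll until the default ServiceAccount is created.
    until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
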
	I0731 11:59:59.107790  879858 settings.go:142] acquiring lock: {Name:mk829b6893936aa5483dce9aaeef4d670cd88116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:59:59.107849  879858 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16968-847174/kubeconfig
	I0731 11:59:59.108589  879858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/kubeconfig: {Name:mk6696558a0c97b92d2f11c98afd477ee2b6ad51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:59:59.109352  879858 kapi.go:59] client config for ingress-addon-legacy-604717: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt", KeyFile:"/home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.key", CAFile:"/home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e64f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 11:59:59.110662  879858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 11:59:59.110892  879858 config.go:182] Loaded profile config "ingress-addon-legacy-604717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0731 11:59:59.110920  879858 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0731 11:59:59.110978  879858 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-604717"
	I0731 11:59:59.110990  879858 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-604717"
	I0731 11:59:59.111043  879858 host.go:66] Checking if "ingress-addon-legacy-604717" exists ...
	I0731 11:59:59.111458  879858 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-604717 --format={{.State.Status}}
	I0731 11:59:59.112098  879858 cert_rotation.go:137] Starting client certificate rotation controller
	I0731 11:59:59.112147  879858 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-604717"
	I0731 11:59:59.112162  879858 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-604717"
	I0731 11:59:59.112430  879858 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-604717 --format={{.State.Status}}
	I0731 11:59:59.150230  879858 kapi.go:59] client config for ingress-addon-legacy-604717: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt", KeyFile:"/home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.key", CAFile:"/home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e64f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 11:59:59.158834  879858 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-604717"
	I0731 11:59:59.158929  879858 host.go:66] Checking if "ingress-addon-legacy-604717" exists ...
	I0731 11:59:59.159477  879858 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-604717 --format={{.State.Status}}
	I0731 11:59:59.182839  879858 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 11:59:59.187288  879858 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 11:59:59.187318  879858 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 11:59:59.187390  879858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-604717
	I0731 11:59:59.193576  879858 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 11:59:59.193597  879858 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 11:59:59.193664  879858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-604717
	I0731 11:59:59.202811  879858 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-604717" context rescaled to 1 replicas
	I0731 11:59:59.202851  879858 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 11:59:59.204654  879858 out.go:177] * Verifying Kubernetes components...
	I0731 11:59:59.206316  879858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 11:59:59.235452  879858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35856 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/ingress-addon-legacy-604717/id_rsa Username:docker}
	I0731 11:59:59.244020  879858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35856 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/ingress-addon-legacy-604717/id_rsa Username:docker}
	I0731 11:59:59.310306  879858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
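The pipeline above edits CoreDNS's Corefile in flight: sed inserts a hosts block that resolves host.minikube.internal to the gateway IP ahead of the forward plugin (plus a log directive before errors), then pipes the modified ConfigMap into kubectl replace. Reconstructed from those sed expressions (not captured from the cluster), the relevant Corefile fragment becomes roughly:

    .:53 {
        log
        errors
        # (other plugins unchanged)
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
    }
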
	I0731 11:59:59.311002  879858 kapi.go:59] client config for ingress-addon-legacy-604717: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt", KeyFile:"/home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.key", CAFile:"/home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e64f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 11:59:59.311282  879858 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-604717" to be "Ready" ...
	I0731 11:59:59.439309  879858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 11:59:59.459140  879858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 11:59:59.661449  879858 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0731 11:59:59.938806  879858 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 11:59:59.940619  879858 addons.go:502] enable addons completed in 829.672784ms: enabled=[storage-provisioner default-storageclass]
	I0731 12:00:01.323886  879858 node_ready.go:58] node "ingress-addon-legacy-604717" has status "Ready":"False"
	I0731 12:00:03.324264  879858 node_ready.go:58] node "ingress-addon-legacy-604717" has status "Ready":"False"
	I0731 12:00:05.823657  879858 node_ready.go:58] node "ingress-addon-legacy-604717" has status "Ready":"False"
	I0731 12:00:07.824254  879858 node_ready.go:58] node "ingress-addon-legacy-604717" has status "Ready":"False"
	I0731 12:00:10.324326  879858 node_ready.go:58] node "ingress-addon-legacy-604717" has status "Ready":"False"
	I0731 12:00:12.823872  879858 node_ready.go:58] node "ingress-addon-legacy-604717" has status "Ready":"False"
	I0731 12:00:15.324520  879858 node_ready.go:58] node "ingress-addon-legacy-604717" has status "Ready":"False"
	I0731 12:00:17.824338  879858 node_ready.go:58] node "ingress-addon-legacy-604717" has status "Ready":"False"
	I0731 12:00:18.323630  879858 node_ready.go:49] node "ingress-addon-legacy-604717" has status "Ready":"True"
	I0731 12:00:18.323661  879858 node_ready.go:38] duration metric: took 19.012357335s waiting for node "ingress-addon-legacy-604717" to be "Ready" ...
	I0731 12:00:18.323673  879858 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 12:00:18.330727  879858 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-tp787" in "kube-system" namespace to be "Ready" ...
	I0731 12:00:20.338586  879858 pod_ready.go:102] pod "coredns-66bff467f8-tp787" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-31 11:59:59 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0731 12:00:22.338803  879858 pod_ready.go:102] pod "coredns-66bff467f8-tp787" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-31 11:59:59 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0731 12:00:24.341499  879858 pod_ready.go:102] pod "coredns-66bff467f8-tp787" in "kube-system" namespace has status "Ready":"False"
	I0731 12:00:26.841026  879858 pod_ready.go:102] pod "coredns-66bff467f8-tp787" in "kube-system" namespace has status "Ready":"False"
	I0731 12:00:28.841684  879858 pod_ready.go:102] pod "coredns-66bff467f8-tp787" in "kube-system" namespace has status "Ready":"False"
	I0731 12:00:30.842002  879858 pod_ready.go:102] pod "coredns-66bff467f8-tp787" in "kube-system" namespace has status "Ready":"False"
	I0731 12:00:31.341073  879858 pod_ready.go:92] pod "coredns-66bff467f8-tp787" in "kube-system" namespace has status "Ready":"True"
	I0731 12:00:31.341097  879858 pod_ready.go:81] duration metric: took 13.010335517s waiting for pod "coredns-66bff467f8-tp787" in "kube-system" namespace to be "Ready" ...
	I0731 12:00:31.341109  879858 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-604717" in "kube-system" namespace to be "Ready" ...
	I0731 12:00:31.346612  879858 pod_ready.go:92] pod "etcd-ingress-addon-legacy-604717" in "kube-system" namespace has status "Ready":"True"
	I0731 12:00:31.346636  879858 pod_ready.go:81] duration metric: took 5.519856ms waiting for pod "etcd-ingress-addon-legacy-604717" in "kube-system" namespace to be "Ready" ...
	I0731 12:00:31.346650  879858 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-604717" in "kube-system" namespace to be "Ready" ...
	I0731 12:00:31.351630  879858 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-604717" in "kube-system" namespace has status "Ready":"True"
	I0731 12:00:31.351655  879858 pod_ready.go:81] duration metric: took 4.997036ms waiting for pod "kube-apiserver-ingress-addon-legacy-604717" in "kube-system" namespace to be "Ready" ...
	I0731 12:00:31.351666  879858 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-604717" in "kube-system" namespace to be "Ready" ...
	I0731 12:00:31.356584  879858 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-604717" in "kube-system" namespace has status "Ready":"True"
	I0731 12:00:31.356612  879858 pod_ready.go:81] duration metric: took 4.936942ms waiting for pod "kube-controller-manager-ingress-addon-legacy-604717" in "kube-system" namespace to be "Ready" ...
	I0731 12:00:31.356624  879858 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gxjw5" in "kube-system" namespace to be "Ready" ...
	I0731 12:00:31.361513  879858 pod_ready.go:92] pod "kube-proxy-gxjw5" in "kube-system" namespace has status "Ready":"True"
	I0731 12:00:31.361537  879858 pod_ready.go:81] duration metric: took 4.905918ms waiting for pod "kube-proxy-gxjw5" in "kube-system" namespace to be "Ready" ...
	I0731 12:00:31.361548  879858 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-604717" in "kube-system" namespace to be "Ready" ...
	I0731 12:00:31.535727  879858 request.go:628] Waited for 174.11185ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-604717
	I0731 12:00:31.736348  879858 request.go:628] Waited for 197.400545ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-604717
	I0731 12:00:31.739002  879858 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-604717" in "kube-system" namespace has status "Ready":"True"
	I0731 12:00:31.739029  879858 pod_ready.go:81] duration metric: took 377.47248ms waiting for pod "kube-scheduler-ingress-addon-legacy-604717" in "kube-system" namespace to be "Ready" ...
	I0731 12:00:31.739042  879858 pod_ready.go:38] duration metric: took 13.415353045s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 12:00:31.739058  879858 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:00:31.739134  879858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:00:31.752757  879858 api_server.go:72] duration metric: took 32.549872116s to wait for apiserver process to appear ...
	I0731 12:00:31.752779  879858 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:00:31.752796  879858 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0731 12:00:31.761845  879858 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0731 12:00:31.762762  879858 api_server.go:141] control plane version: v1.18.20
	I0731 12:00:31.762789  879858 api_server.go:131] duration metric: took 10.002803ms to wait for apiserver health ...
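The healthz probe above is a plain HTTPS GET against the apiserver; /healthz is readable without credentials because the default system:public-info-viewer binding exposes it to unauthenticated clients. A hedged manual equivalent from the host (-k skips verification of the minikube CA for brevity):

    curl -k https://192.168.49.2:8443/healthz
    # expected output: ok
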
	I0731 12:00:31.762798  879858 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 12:00:31.936209  879858 request.go:628] Waited for 173.32773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0731 12:00:31.942146  879858 system_pods.go:59] 8 kube-system pods found
	I0731 12:00:31.942181  879858 system_pods.go:61] "coredns-66bff467f8-tp787" [30e4bd91-000c-4e7d-a5c2-12154a807836] Running
	I0731 12:00:31.942188  879858 system_pods.go:61] "etcd-ingress-addon-legacy-604717" [7d4a66ec-2368-4ecb-ad3a-de8fd0176d39] Running
	I0731 12:00:31.942196  879858 system_pods.go:61] "kindnet-fxf49" [924eb0e4-3602-4437-afff-1200ecd9daf9] Running
	I0731 12:00:31.942202  879858 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-604717" [65494361-f117-43fe-b23f-491baffd5448] Running
	I0731 12:00:31.942207  879858 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-604717" [616e6880-ba2d-41c1-a326-97f7aaeb4f81] Running
	I0731 12:00:31.942212  879858 system_pods.go:61] "kube-proxy-gxjw5" [5728c70c-5ad5-497b-b0d9-de4b7e668643] Running
	I0731 12:00:31.942219  879858 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-604717" [8a24dfa7-beb5-4593-b740-ea40f582e111] Running
	I0731 12:00:31.942229  879858 system_pods.go:61] "storage-provisioner" [c656ddbb-d27d-4e20-aa7b-04c88287d62f] Running
	I0731 12:00:31.942234  879858 system_pods.go:74] duration metric: took 179.43129ms to wait for pod list to return data ...
	I0731 12:00:31.942250  879858 default_sa.go:34] waiting for default service account to be created ...
	I0731 12:00:32.136673  879858 request.go:628] Waited for 194.331322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0731 12:00:32.139107  879858 default_sa.go:45] found service account: "default"
	I0731 12:00:32.139135  879858 default_sa.go:55] duration metric: took 196.878127ms for default service account to be created ...
	I0731 12:00:32.139146  879858 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 12:00:32.336569  879858 request.go:628] Waited for 197.354375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0731 12:00:32.342597  879858 system_pods.go:86] 8 kube-system pods found
	I0731 12:00:32.342631  879858 system_pods.go:89] "coredns-66bff467f8-tp787" [30e4bd91-000c-4e7d-a5c2-12154a807836] Running
	I0731 12:00:32.342644  879858 system_pods.go:89] "etcd-ingress-addon-legacy-604717" [7d4a66ec-2368-4ecb-ad3a-de8fd0176d39] Running
	I0731 12:00:32.342649  879858 system_pods.go:89] "kindnet-fxf49" [924eb0e4-3602-4437-afff-1200ecd9daf9] Running
	I0731 12:00:32.342655  879858 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-604717" [65494361-f117-43fe-b23f-491baffd5448] Running
	I0731 12:00:32.342660  879858 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-604717" [616e6880-ba2d-41c1-a326-97f7aaeb4f81] Running
	I0731 12:00:32.342665  879858 system_pods.go:89] "kube-proxy-gxjw5" [5728c70c-5ad5-497b-b0d9-de4b7e668643] Running
	I0731 12:00:32.342670  879858 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-604717" [8a24dfa7-beb5-4593-b740-ea40f582e111] Running
	I0731 12:00:32.342677  879858 system_pods.go:89] "storage-provisioner" [c656ddbb-d27d-4e20-aa7b-04c88287d62f] Running
	I0731 12:00:32.342696  879858 system_pods.go:126] duration metric: took 203.542119ms to wait for k8s-apps to be running ...
	I0731 12:00:32.342715  879858 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 12:00:32.342786  879858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 12:00:32.357034  879858 system_svc.go:56] duration metric: took 14.305318ms WaitForService to wait for kubelet.
	I0731 12:00:32.357064  879858 kubeadm.go:581] duration metric: took 33.154185642s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0731 12:00:32.357085  879858 node_conditions.go:102] verifying NodePressure condition ...
	I0731 12:00:32.536538  879858 request.go:628] Waited for 179.363319ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0731 12:00:32.539562  879858 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0731 12:00:32.539597  879858 node_conditions.go:123] node cpu capacity is 2
	I0731 12:00:32.539609  879858 node_conditions.go:105] duration metric: took 182.519213ms to run NodePressure ...
	I0731 12:00:32.539621  879858 start.go:228] waiting for startup goroutines ...
	I0731 12:00:32.539628  879858 start.go:233] waiting for cluster config update ...
	I0731 12:00:32.539638  879858 start.go:242] writing updated cluster config ...
	I0731 12:00:32.539983  879858 ssh_runner.go:195] Run: rm -f paused
	I0731 12:00:32.605930  879858 start.go:596] kubectl: 1.27.4, cluster: 1.18.20 (minor skew: 9)
	I0731 12:00:32.608713  879858 out.go:177] 
	W0731 12:00:32.610752  879858 out.go:239] ! /usr/local/bin/kubectl is version 1.27.4, which may have incompatibilities with Kubernetes 1.18.20.
	I0731 12:00:32.612672  879858 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0731 12:00:32.614415  879858 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-604717" cluster and "default" namespace by default
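
For reference, the readiness gate logged above (api_server.go) is nothing more than an HTTPS GET against /healthz that must come back 200 with body "ok", retried until it does. A minimal sketch of that probe in Go follows; the endpoint matches the log, but the CA path and the retry cadence are assumptions for illustration, not minikube's actual code:

	// healthz_probe.go - poll the apiserver /healthz endpoint until it reports "ok".
	// Sketch only: the CA location and 1s retry interval are assumed.
	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	func main() {
		caCert, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed CA path
		if err != nil {
			panic(err)
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caCert)
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
		}
		for {
			resp, err := client.Get("https://192.168.49.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(time.Second) // not ready yet; retry
		}
	}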
	
	* 
	* ==> CRI-O <==
	* Jul 31 12:03:37 ingress-addon-legacy-604717 crio[895]: time="2023-07-31 12:03:37.472400355Z" level=info msg="Stopped container cbc025eefc081b12c48557218ec163264af2c9ba69159f627f176eddc9b38226: ingress-nginx/ingress-nginx-controller-7fcf777cb7-ttdtn/controller" id=b542d4b8-ed4b-4070-86c7-5c9dcbd1010d name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 31 12:03:37 ingress-addon-legacy-604717 crio[895]: time="2023-07-31 12:03:37.472817199Z" level=info msg="Stopped container cbc025eefc081b12c48557218ec163264af2c9ba69159f627f176eddc9b38226: ingress-nginx/ingress-nginx-controller-7fcf777cb7-ttdtn/controller" id=2685ece6-0784-47c3-88d1-024dc2a53149 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 31 12:03:37 ingress-addon-legacy-604717 crio[895]: time="2023-07-31 12:03:37.473137354Z" level=info msg="Stopping pod sandbox: 508e2ca3997385eb5e3c3e64a32382cb927b24c275ed539f06c1c57ba9e42eea" id=cf5692b9-b8bd-4dd6-923a-1cea7dc4a164 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 31 12:03:37 ingress-addon-legacy-604717 crio[895]: time="2023-07-31 12:03:37.473181309Z" level=info msg="Stopping pod sandbox: 508e2ca3997385eb5e3c3e64a32382cb927b24c275ed539f06c1c57ba9e42eea" id=c06d7dd1-2f9c-4b54-a528-7e6bc73ea9b0 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 31 12:03:37 ingress-addon-legacy-604717 crio[895]: time="2023-07-31 12:03:37.476586933Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-D44LMDY6KQCTI5IQ - [0:0]\n:KUBE-HP-R3ZF3XBVZED3PYU5 - [0:0]\n-X KUBE-HP-D44LMDY6KQCTI5IQ\n-X KUBE-HP-R3ZF3XBVZED3PYU5\nCOMMIT\n"
	Jul 31 12:03:37 ingress-addon-legacy-604717 crio[895]: time="2023-07-31 12:03:37.478278538Z" level=info msg="Closing host port tcp:80"
	Jul 31 12:03:37 ingress-addon-legacy-604717 crio[895]: time="2023-07-31 12:03:37.478334505Z" level=info msg="Closing host port tcp:443"
	Jul 31 12:03:37 ingress-addon-legacy-604717 crio[895]: time="2023-07-31 12:03:37.479648445Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jul 31 12:03:37 ingress-addon-legacy-604717 crio[895]: time="2023-07-31 12:03:37.479675201Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jul 31 12:03:37 ingress-addon-legacy-604717 crio[895]: time="2023-07-31 12:03:37.479828169Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-ttdtn Namespace:ingress-nginx ID:508e2ca3997385eb5e3c3e64a32382cb927b24c275ed539f06c1c57ba9e42eea UID:7ad52c0e-4e8d-4420-812b-6b488c36e0d9 NetNS:/var/run/netns/a2de99d4-880a-4489-a69c-c9f7bb8af860 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 31 12:03:37 ingress-addon-legacy-604717 crio[895]: time="2023-07-31 12:03:37.479966343Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-ttdtn from CNI network \"kindnet\" (type=ptp)"
	Jul 31 12:03:37 ingress-addon-legacy-604717 crio[895]: time="2023-07-31 12:03:37.506015654Z" level=info msg="Stopped pod sandbox: 508e2ca3997385eb5e3c3e64a32382cb927b24c275ed539f06c1c57ba9e42eea" id=cf5692b9-b8bd-4dd6-923a-1cea7dc4a164 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 31 12:03:37 ingress-addon-legacy-604717 crio[895]: time="2023-07-31 12:03:37.506169049Z" level=info msg="Stopped pod sandbox (already stopped): 508e2ca3997385eb5e3c3e64a32382cb927b24c275ed539f06c1c57ba9e42eea" id=c06d7dd1-2f9c-4b54-a528-7e6bc73ea9b0 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 31 12:03:37 ingress-addon-legacy-604717 crio[895]: time="2023-07-31 12:03:37.781676196Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=eeccc5d8-fc23-4505-8da2-9015289296cf name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 12:03:37 ingress-addon-legacy-604717 crio[895]: time="2023-07-31 12:03:37.781887625Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=eeccc5d8-fc23-4505-8da2-9015289296cf name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 12:03:37 ingress-addon-legacy-604717 crio[895]: time="2023-07-31 12:03:37.782753410Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=7f44f9d3-cd1b-4c77-811d-1e57b184a91a name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 12:03:37 ingress-addon-legacy-604717 crio[895]: time="2023-07-31 12:03:37.782931666Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=7f44f9d3-cd1b-4c77-811d-1e57b184a91a name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 31 12:03:37 ingress-addon-legacy-604717 crio[895]: time="2023-07-31 12:03:37.783563943Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-csxdf/hello-world-app" id=2c065395-ab4b-4237-bfc9-67afe5c35c83 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 31 12:03:37 ingress-addon-legacy-604717 crio[895]: time="2023-07-31 12:03:37.783661239Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 31 12:03:37 ingress-addon-legacy-604717 crio[895]: time="2023-07-31 12:03:37.874600684Z" level=info msg="Created container 7ccbba56aee3dbbfa3683d0727b52f14b7697f149b2833cffb8e4ee7def64129: default/hello-world-app-5f5d8b66bb-csxdf/hello-world-app" id=2c065395-ab4b-4237-bfc9-67afe5c35c83 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 31 12:03:37 ingress-addon-legacy-604717 crio[895]: time="2023-07-31 12:03:37.875655351Z" level=info msg="Starting container: 7ccbba56aee3dbbfa3683d0727b52f14b7697f149b2833cffb8e4ee7def64129" id=03b57fe6-b7ce-4d17-930b-444723b620ad name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jul 31 12:03:37 ingress-addon-legacy-604717 conmon[3714]: conmon 7ccbba56aee3dbbfa368 <ninfo>: container 3726 exited with status 1
	Jul 31 12:03:37 ingress-addon-legacy-604717 crio[895]: time="2023-07-31 12:03:37.892724329Z" level=info msg="Started container" PID=3726 containerID=7ccbba56aee3dbbfa3683d0727b52f14b7697f149b2833cffb8e4ee7def64129 description=default/hello-world-app-5f5d8b66bb-csxdf/hello-world-app id=03b57fe6-b7ce-4d17-930b-444723b620ad name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=feb3ea9844d7b1dcb12f4a8fda80424a55f57c3d1e1023f4cdba251825303452
	Jul 31 12:03:38 ingress-addon-legacy-604717 crio[895]: time="2023-07-31 12:03:38.377031692Z" level=info msg="Removing container: 3c0174b14c021fb019bd9ebefc7251c0188727e5ae03d1805a2cf6ada7b8d065" id=e6810f3a-c36a-4b44-83c8-95001e810248 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jul 31 12:03:38 ingress-addon-legacy-604717 crio[895]: time="2023-07-31 12:03:38.402701181Z" level=info msg="Removed container 3c0174b14c021fb019bd9ebefc7251c0188727e5ae03d1805a2cf6ada7b8d065: default/hello-world-app-5f5d8b66bb-csxdf/hello-world-app" id=e6810f3a-c36a-4b44-83c8-95001e810248 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
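 
The StopContainer/StopPodSandbox pairs above are CRI calls arriving over CRI-O's unix socket; the ids show the v1alpha2 generation of the API in use here. A rough sketch of issuing the same RPC with the generated cri-api client, under the assumption that plaintext gRPC over the local socket is acceptable (it is for the kubelet):

	// stop_sandbox.go - stop a pod sandbox over the CRI v1alpha2 API, as the kubelet does.
	package main

	import (
		"context"
		"log"
		"time"

		"google.golang.org/grpc"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithInsecure(), grpc.WithBlock())
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		// Sandbox ID taken from the log lines above.
		_, err = client.StopPodSandbox(context.Background(), &runtimeapi.StopPodSandboxRequest{
			PodSandboxId: "508e2ca3997385eb5e3c3e64a32382cb927b24c275ed539f06c1c57ba9e42eea",
		})
		if err != nil {
			log.Fatal(err)
		}
		log.Println("sandbox stopped") // CRI-O then tears down CNI, as logged above
	}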
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7ccbba56aee3d       13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5                                                   5 seconds ago       Exited              hello-world-app           2                   feb3ea9844d7b       hello-world-app-5f5d8b66bb-csxdf
	99ee8f7812da8       docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                    2 minutes ago       Running             nginx                     0                   b379e382d80b5       nginx
	cbc025eefc081       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   508e2ca399738       ingress-nginx-controller-7fcf777cb7-ttdtn
	5e9bef3ebf221       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              patch                     0                   29eb10a5448be       ingress-nginx-admission-patch-hl8rf
	f994b49d65f83       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   433176458bc1a       ingress-nginx-admission-create-gjqgq
	19b8b88deb28b       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   6e9718c33026c       storage-provisioner
	c505f0543c297       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   dd6f40561e1b2       coredns-66bff467f8-tp787
	2085b06c12bc1       docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f                 3 minutes ago       Running             kindnet-cni               0                   794474e88520a       kindnet-fxf49
	81733abc287aa       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   e6391172392ad       kube-proxy-gxjw5
	09961df10608e       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   4 minutes ago       Running             etcd                      0                   a0143bf9c19dd       etcd-ingress-addon-legacy-604717
	6321a7bfb4f98       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   4 minutes ago       Running             kube-controller-manager   0                   c21613d9c4a79       kube-controller-manager-ingress-addon-legacy-604717
	6ca3f5e962845       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   4 minutes ago       Running             kube-apiserver            0                   c28be3a9fc2b6       kube-apiserver-ingress-addon-legacy-604717
	2c6d2ac3f4b9d       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   4 minutes ago       Running             kube-scheduler            0                   560b8d6e7f6d2       kube-scheduler-ingress-addon-legacy-604717
	
	* 
	* ==> coredns [c505f0543c297a9d3928fa24bfa4d1b50f103c558eb9ee98123b800b0c07ec4b] <==
	* [INFO] 10.244.0.5:43770 - 18277 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000053842s
	[INFO] 10.244.0.5:43770 - 40446 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000062793s
	[INFO] 10.244.0.5:59611 - 16609 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002033618s
	[INFO] 10.244.0.5:59611 - 47294 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000109267s
	[INFO] 10.244.0.5:43770 - 47072 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001350609s
	[INFO] 10.244.0.5:43770 - 11306 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001138146s
	[INFO] 10.244.0.5:43770 - 25391 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060102s
	[INFO] 10.244.0.5:56247 - 36429 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000084135s
	[INFO] 10.244.0.5:37135 - 50464 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000074109s
	[INFO] 10.244.0.5:56247 - 40847 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000119467s
	[INFO] 10.244.0.5:56247 - 229 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000093924s
	[INFO] 10.244.0.5:37135 - 42289 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000120245s
	[INFO] 10.244.0.5:56247 - 23271 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045998s
	[INFO] 10.244.0.5:37135 - 50799 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032017s
	[INFO] 10.244.0.5:56247 - 23144 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041378s
	[INFO] 10.244.0.5:37135 - 56693 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032221s
	[INFO] 10.244.0.5:56247 - 3085 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034946s
	[INFO] 10.244.0.5:37135 - 45512 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034346s
	[INFO] 10.244.0.5:37135 - 27447 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000060283s
	[INFO] 10.244.0.5:37135 - 17854 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000882754s
	[INFO] 10.244.0.5:56247 - 39832 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001464873s
	[INFO] 10.244.0.5:37135 - 4704 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001125157s
	[INFO] 10.244.0.5:56247 - 57137 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00129858s
	[INFO] 10.244.0.5:37135 - 47232 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000042265s
	[INFO] 10.244.0.5:56247 - 32542 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000028603s
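
The NXDOMAIN bursts above are the pod's resolver walking its search path: with the cluster default ndots:5, "hello-world-app.default.svc.cluster.local" (only four dots) is first tried with every search suffix (ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) before the absolute name finally answers NOERROR. A trailing dot marks the name fully qualified and skips the walk entirely; a tiny sketch, which only resolves when run inside the cluster:

	// fqdn_lookup.go - resolve the service name as an absolute FQDN (note the trailing
	// dot), avoiding the ndots:5 search-path expansion visible in the coredns log above.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		addrs, err := net.LookupHost("hello-world-app.default.svc.cluster.local.")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("resolved:", addrs) // the service ClusterIP, inside the cluster
	}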
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-604717
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-604717
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0b7b0f4114385a1c2b88c73e894c2289f44aee35
	                    minikube.k8s.io/name=ingress-addon-legacy-604717
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_31T11_59_44_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 31 Jul 2023 11:59:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-604717
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 31 Jul 2023 12:03:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 31 Jul 2023 12:01:17 +0000   Mon, 31 Jul 2023 11:59:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 31 Jul 2023 12:01:17 +0000   Mon, 31 Jul 2023 11:59:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 31 Jul 2023 12:01:17 +0000   Mon, 31 Jul 2023 11:59:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 31 Jul 2023 12:01:17 +0000   Mon, 31 Jul 2023 12:00:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-604717
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	System Info:
	  Machine ID:                 527d313845e84fb0be27e55c5925ad36
	  System UUID:                90fdde8b-6897-4555-a348-16f8f436bdfc
	  Boot ID:                    3709f028-2d57-4df1-ae3d-22c113dc2eeb
	  Kernel Version:             5.15.0-1040-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-csxdf                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 coredns-66bff467f8-tp787                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m44s
	  kube-system                 etcd-ingress-addon-legacy-604717                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kindnet-fxf49                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m44s
	  kube-system                 kube-apiserver-ingress-addon-legacy-604717             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-604717    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-proxy-gxjw5                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-scheduler-ingress-addon-legacy-604717             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  4m10s (x5 over 4m10s)  kubelet     Node ingress-addon-legacy-604717 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s (x5 over 4m10s)  kubelet     Node ingress-addon-legacy-604717 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s (x4 over 4m10s)  kubelet     Node ingress-addon-legacy-604717 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m56s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m56s                  kubelet     Node ingress-addon-legacy-604717 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m56s                  kubelet     Node ingress-addon-legacy-604717 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m56s                  kubelet     Node ingress-addon-legacy-604717 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m43s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m26s                  kubelet     Node ingress-addon-legacy-604717 status is now: NodeReady
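
The Allocated-resources percentages above are simply requests over node allocatable: 750m of CPU against the node's 2 CPUs is 37%, and 120Mi of memory against 8022628Ki rounds down to 1%. A small sketch of that arithmetic with the apimachinery quantity type kubectl itself uses:

	// alloc_percent.go - recompute the "Allocated resources" percentages from the table.
	package main

	import (
		"fmt"

		"k8s.io/apimachinery/pkg/api/resource"
	)

	func main() {
		cpuReq := resource.MustParse("750m")
		cpuAlloc := resource.MustParse("2") // node allocatable: 2 CPUs
		fmt.Printf("cpu: %d%%\n", cpuReq.MilliValue()*100/cpuAlloc.MilliValue()) // 37%

		memReq := resource.MustParse("120Mi")
		memAlloc := resource.MustParse("8022628Ki") // node allocatable memory
		fmt.Printf("memory: %d%%\n", memReq.Value()*100/memAlloc.Value()) // 1%
	}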
	
	* 
	* ==> dmesg <==
	* [  +0.001037] FS-Cache: O-key=[8] 'ede1c90000000000'
	[  +0.000719] FS-Cache: N-cookie c=000000ad [p=000000a4 fl=2 nc=0 na=1]
	[  +0.000960] FS-Cache: N-cookie d=00000000d17d7ada{9p.inode} n=000000001a6bd468
	[  +0.001024] FS-Cache: N-key=[8] 'ede1c90000000000'
	[  +0.005951] FS-Cache: Duplicate cookie detected
	[  +0.000744] FS-Cache: O-cookie c=000000a7 [p=000000a4 fl=226 nc=0 na=1]
	[  +0.001019] FS-Cache: O-cookie d=00000000d17d7ada{9p.inode} n=0000000040ec07b0
	[  +0.001100] FS-Cache: O-key=[8] 'ede1c90000000000'
	[  +0.000739] FS-Cache: N-cookie c=000000ae [p=000000a4 fl=2 nc=0 na=1]
	[  +0.000924] FS-Cache: N-cookie d=00000000d17d7ada{9p.inode} n=00000000bcbfd487
	[  +0.001134] FS-Cache: N-key=[8] 'ede1c90000000000'
	[  +2.785467] FS-Cache: Duplicate cookie detected
	[  +0.000734] FS-Cache: O-cookie c=000000a5 [p=000000a4 fl=226 nc=0 na=1]
	[  +0.000951] FS-Cache: O-cookie d=00000000d17d7ada{9p.inode} n=000000006d9d7fe3
	[  +0.001098] FS-Cache: O-key=[8] 'ebe1c90000000000'
	[  +0.000685] FS-Cache: N-cookie c=000000b0 [p=000000a4 fl=2 nc=0 na=1]
	[  +0.000905] FS-Cache: N-cookie d=00000000d17d7ada{9p.inode} n=0000000073926d86
	[  +0.001020] FS-Cache: N-key=[8] 'ebe1c90000000000'
	[  +0.282652] FS-Cache: Duplicate cookie detected
	[  +0.000710] FS-Cache: O-cookie c=000000aa [p=000000a4 fl=226 nc=0 na=1]
	[  +0.001044] FS-Cache: O-cookie d=00000000d17d7ada{9p.inode} n=000000008660d6a7
	[  +0.001083] FS-Cache: O-key=[8] 'f4e1c90000000000'
	[  +0.000746] FS-Cache: N-cookie c=000000b1 [p=000000a4 fl=2 nc=0 na=1]
	[  +0.000957] FS-Cache: N-cookie d=00000000d17d7ada{9p.inode} n=000000007f9efda2
	[  +0.001104] FS-Cache: N-key=[8] 'f4e1c90000000000'
	
	* 
	* ==> etcd [09961df10608e1ebd17cb749f0a5591e61bd62125ae7afb8971f9efebc4a3ef8] <==
	* raft2023/07/31 11:59:36 INFO: aec36adc501070cc became follower at term 0
	raft2023/07/31 11:59:36 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/07/31 11:59:36 INFO: aec36adc501070cc became follower at term 1
	raft2023/07/31 11:59:36 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-07-31 11:59:36.368157 W | auth: simple token is not cryptographically signed
	2023-07-31 11:59:36.434372 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-07-31 11:59:36.697933 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/07/31 11:59:36 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-07-31 11:59:36.712279 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-07-31 11:59:36.823128 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-07-31 11:59:36.823404 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-07-31 11:59:36.823595 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/07/31 11:59:37 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/07/31 11:59:37 INFO: aec36adc501070cc became candidate at term 2
	raft2023/07/31 11:59:37 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/07/31 11:59:37 INFO: aec36adc501070cc became leader at term 2
	raft2023/07/31 11:59:37 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-07-31 11:59:37.102219 I | etcdserver: published {Name:ingress-addon-legacy-604717 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-07-31 11:59:37.102420 I | embed: ready to serve client requests
	2023-07-31 11:59:37.104360 I | embed: serving client requests on 192.168.49.2:2379
	2023-07-31 11:59:37.104599 I | etcdserver: setting up the initial cluster version to 3.4
	2023-07-31 11:59:37.105253 I | embed: ready to serve client requests
	2023-07-31 11:59:37.107644 I | embed: serving client requests on 127.0.0.1:2379
	2023-07-31 11:59:37.131355 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-07-31 11:59:37.131594 I | etcdserver/api: enabled capabilities for version 3.4
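
The raft lines show the usual single-node bootstrap: the lone member (aec36adc501070cc) starts an election at term 1, votes for itself, and becomes leader at term 2. A sketch of querying that member's status with the etcd v3 client; the endpoint and cert paths come from the embed lines above, and reusing the server cert as client credentials is an assumption of this sketch:

	// etcd_status.go - ask the single etcd member for its status (version, term, leader).
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"go.etcd.io/etcd/clientv3"
		"go.etcd.io/etcd/pkg/transport"
	)

	func main() {
		tlsInfo := transport.TLSInfo{
			CertFile:      "/var/lib/minikube/certs/etcd/server.crt", // assumed usable as client cert
			KeyFile:       "/var/lib/minikube/certs/etcd/server.key",
			TrustedCAFile: "/var/lib/minikube/certs/etcd/ca.crt",
		}
		tlsConfig, err := tlsInfo.ClientConfig()
		if err != nil {
			log.Fatal(err)
		}
		cli, err := clientv3.New(clientv3.Config{
			Endpoints:   []string{"https://192.168.49.2:2379"},
			DialTimeout: 5 * time.Second,
			TLS:         tlsConfig,
		})
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()

		st, err := cli.Status(context.Background(), "https://192.168.49.2:2379")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("version=%s raftTerm=%d leader=%x\n", st.Version, st.RaftTerm, st.Leader)
	}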
	
	* 
	* ==> kernel <==
	*  12:03:43 up 19:46,  0 users,  load average: 1.59, 1.61, 2.59
	Linux ingress-addon-legacy-604717 5.15.0-1040-aws #45~20.04.1-Ubuntu SMP Tue Jul 11 19:11:12 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [2085b06c12bc1112eb3abf0bb05d97b5f751deaabf1de9c38f8d21faa1e566e5] <==
	* I0731 12:01:34.804533       1 main.go:227] handling current node
	I0731 12:01:44.807668       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 12:01:44.807702       1 main.go:227] handling current node
	I0731 12:01:54.818378       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 12:01:54.818404       1 main.go:227] handling current node
	I0731 12:02:04.822418       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 12:02:04.822446       1 main.go:227] handling current node
	I0731 12:02:14.834028       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 12:02:14.834134       1 main.go:227] handling current node
	I0731 12:02:24.837536       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 12:02:24.837565       1 main.go:227] handling current node
	I0731 12:02:34.843300       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 12:02:34.843333       1 main.go:227] handling current node
	I0731 12:02:44.854897       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 12:02:44.854926       1 main.go:227] handling current node
	I0731 12:02:54.864219       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 12:02:54.864246       1 main.go:227] handling current node
	I0731 12:03:04.867330       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 12:03:04.867357       1 main.go:227] handling current node
	I0731 12:03:14.870888       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 12:03:14.870918       1 main.go:227] handling current node
	I0731 12:03:24.874118       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 12:03:24.874147       1 main.go:227] handling current node
	I0731 12:03:34.891697       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 12:03:34.892278       1 main.go:227] handling current node
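
kindnet's ten-second heartbeat above is a plain relist of Nodes followed by route programming for every node except the one it is handling. The list step looks roughly like this with client-go (in-cluster config, as the DaemonSet pod runs it); the route sync itself is elided:

	// node_poll.go - relist cluster nodes every 10s, like the kindnet loop logged above.
	package main

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // uses the pod's service account
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		for {
			nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
			if err != nil {
				log.Println("list failed:", err)
			} else {
				for _, n := range nodes.Items {
					log.Printf("Handling node %s", n.Name) // kindnet syncs routes here
				}
			}
			time.Sleep(10 * time.Second)
		}
	}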
	
	* 
	* ==> kube-apiserver [6ca3f5e96284548b931e141304c8fde62fb08f8d26b057a812bd0516fc837e9f] <==
	* I0731 11:59:41.257259       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0731 11:59:41.159391       1 controller.go:81] Starting OpenAPI AggregationController
	I0731 11:59:41.348628       1 cache.go:39] Caches are synced for autoregister controller
	I0731 11:59:41.349039       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0731 11:59:41.353018       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 11:59:41.353058       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0731 11:59:41.353077       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 11:59:42.158432       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0731 11:59:42.158471       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0731 11:59:42.165926       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0731 11:59:42.178305       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0731 11:59:42.178335       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0731 11:59:42.629174       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 11:59:42.671857       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0731 11:59:42.747483       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0731 11:59:42.748563       1 controller.go:609] quota admission added evaluator for: endpoints
	I0731 11:59:42.752518       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 11:59:43.588997       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0731 11:59:44.318574       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0731 11:59:44.395003       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0731 11:59:47.774593       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 11:59:59.633219       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0731 11:59:59.641583       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0731 12:00:33.495391       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0731 12:00:58.088816       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [6321a7bfb4f987acb536229e4f356ce6d63adb6060c0022e0bb16a3f15554f0a] <==
	* I0731 11:59:59.567416       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-604717", UID:"6bf68544-d544-4aa1-8733-d1eb5828893a", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-604717 event: Registered Node ingress-addon-legacy-604717 in Controller
	I0731 11:59:59.585614       1 range_allocator.go:373] Set node ingress-addon-legacy-604717 PodCIDR to [10.244.0.0/24]
	I0731 11:59:59.591677       1 shared_informer.go:230] Caches are synced for deployment 
	I0731 11:59:59.604404       1 shared_informer.go:230] Caches are synced for resource quota 
	I0731 11:59:59.626581       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0731 11:59:59.626851       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0731 11:59:59.660963       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0731 11:59:59.667085       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I0731 11:59:59.667619       1 shared_informer.go:230] Caches are synced for resource quota 
	I0731 11:59:59.726681       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"27608caf-9ea5-4b40-a47e-a72602fbabd6", APIVersion:"apps/v1", ResourceVersion:"233", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-fxf49
	I0731 11:59:59.726783       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"0ea67c66-f913-4376-a668-201f06cc85b4", APIVersion:"apps/v1", ResourceVersion:"220", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-gxjw5
	I0731 11:59:59.726699       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"fca24e0d-8a3c-41e5-b39f-b4859b58c01d", APIVersion:"apps/v1", ResourceVersion:"327", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I0731 11:59:59.793620       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"5255ebe2-7488-4a38-b761-35643925687d", APIVersion:"apps/v1", ResourceVersion:"337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-tp787
	E0731 11:59:59.839063       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"0ea67c66-f913-4376-a668-201f06cc85b4", ResourceVersion:"220", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63826401584, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000375940), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000375960)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40003759a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001105480), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40003759e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000375a40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000375a80)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4000af74a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000b6ad08), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40003a9570), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000f150)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000b6ad58)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0731 12:00:19.604535       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0731 12:00:33.462639       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"bf5467d9-b76a-4c45-a5cf-9c50b17cd717", APIVersion:"apps/v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0731 12:00:33.482957       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"d5d48634-5ce6-40d1-914d-6438d2bab04b", APIVersion:"apps/v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-ttdtn
	I0731 12:00:33.522248       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"bfba62e5-6c7c-4532-a6f8-6993f4469f31", APIVersion:"batch/v1", ResourceVersion:"469", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-gjqgq
	I0731 12:00:33.584690       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"04cfcb7a-2332-41fd-a14a-40bb145f5b86", APIVersion:"batch/v1", ResourceVersion:"479", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-hl8rf
	I0731 12:00:35.983295       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"bfba62e5-6c7c-4532-a6f8-6993f4469f31", APIVersion:"batch/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0731 12:00:36.981990       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"04cfcb7a-2332-41fd-a14a-40bb145f5b86", APIVersion:"batch/v1", ResourceVersion:"486", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0731 12:03:17.735490       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"e69120b1-ed6e-4afa-883b-17683b37226e", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0731 12:03:17.751814       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"d1513a4b-9511-4aab-bc64-4f40197fb901", APIVersion:"apps/v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-csxdf
	E0731 12:03:40.093399       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-tz2cw" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
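
The one real error in this section, 'Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified', is an optimistic-concurrency conflict: the controller wrote status against a stale resourceVersion and simply retried. Client code hits the same thing and handles it the same way; a sketch with client-go's conflict-retry helper (the kubeconfig path and label mutation are illustrative assumptions):

	// conflict_retry.go - re-read and re-apply an update when the apiserver reports a
	// resourceVersion conflict, mirroring what the daemonset controller does above.
	package main

	import (
		"context"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx := context.Background()
		err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
			// Re-read the latest object each attempt so the resourceVersion is fresh.
			ds, err := cs.AppsV1().DaemonSets("kube-system").Get(ctx, "kube-proxy", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if ds.Labels == nil {
				ds.Labels = map[string]string{}
			}
			ds.Labels["example/touched"] = "true" // any mutation; re-applied per retry
			_, err = cs.AppsV1().DaemonSets("kube-system").Update(ctx, ds, metav1.UpdateOptions{})
			return err // a Conflict error makes RetryOnConflict loop again
		})
		if err != nil {
			log.Fatal(err)
		}
	}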
	
	* 
	* ==> kube-proxy [81733abc287aa04bfad4eed34246d839979e34ad5fd27436848a1e8ac45ec772] <==
	* W0731 12:00:00.935787       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0731 12:00:00.947812       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0731 12:00:00.947867       1 server_others.go:186] Using iptables Proxier.
	I0731 12:00:00.948279       1 server.go:583] Version: v1.18.20
	I0731 12:00:00.949457       1 config.go:133] Starting endpoints config controller
	I0731 12:00:00.949506       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0731 12:00:00.949567       1 config.go:315] Starting service config controller
	I0731 12:00:00.949577       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0731 12:00:01.058518       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0731 12:00:01.058523       1 shared_informer.go:230] Caches are synced for service config 
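
The "Waiting for caches to sync" / "Caches are synced" pairs are the standard shared-informer startup handshake: the proxier refuses to program any rules until its Endpoints and Service watches have caught up with the apiserver. The same pattern in client-go, roughly:

	// cache_sync.go - start a shared informer and block until its cache is synced,
	// the handshake behind the kube-proxy log lines above.
	package main

	import (
		"log"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/cache"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
		svcInformer := factory.Core().V1().Services().Informer()

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)

		log.Println("Waiting for caches to sync for service config")
		if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
			log.Fatal("cache sync failed")
		}
		log.Println("Caches are synced for service config")
	}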
	
	* 
	* ==> kube-scheduler [2c6d2ac3f4b9d6ce3d06290193ad7043b928dcfcbe1c3eb5aba421e1bf8038df] <==
	* I0731 11:59:41.374497       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0731 11:59:41.377859       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0731 11:59:41.379021       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 11:59:41.379095       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 11:59:41.379158       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0731 11:59:41.392639       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 11:59:41.392869       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 11:59:41.393119       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 11:59:41.393198       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 11:59:41.393264       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 11:59:41.393337       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 11:59:41.393400       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 11:59:41.400440       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 11:59:41.400583       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 11:59:41.400654       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 11:59:41.400716       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 11:59:41.400774       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 11:59:42.325675       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 11:59:42.375595       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 11:59:42.428678       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 11:59:42.429783       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 11:59:42.462593       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 11:59:42.511770       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0731 11:59:45.479906       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0731 11:59:59.867467       1 factory.go:503] pod kube-system/coredns-66bff467f8-tp787 is already present in the backoff queue
	
	* 
	* ==> kubelet <==
	* Jul 31 12:03:21 ingress-addon-legacy-604717 kubelet[1656]: I0731 12:03:21.346414    1656 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: c73f58422541449d48a85e1b5e76cc9e74f63765245388480c60fecd15bd8d23
	Jul 31 12:03:21 ingress-addon-legacy-604717 kubelet[1656]: I0731 12:03:21.346652    1656 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 3c0174b14c021fb019bd9ebefc7251c0188727e5ae03d1805a2cf6ada7b8d065
	Jul 31 12:03:21 ingress-addon-legacy-604717 kubelet[1656]: E0731 12:03:21.346894    1656 pod_workers.go:191] Error syncing pod 8d3a4941-8583-4434-957a-b01b28bba793 ("hello-world-app-5f5d8b66bb-csxdf_default(8d3a4941-8583-4434-957a-b01b28bba793)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-csxdf_default(8d3a4941-8583-4434-957a-b01b28bba793)"
	Jul 31 12:03:22 ingress-addon-legacy-604717 kubelet[1656]: I0731 12:03:22.349295    1656 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 3c0174b14c021fb019bd9ebefc7251c0188727e5ae03d1805a2cf6ada7b8d065
	Jul 31 12:03:22 ingress-addon-legacy-604717 kubelet[1656]: E0731 12:03:22.349558    1656 pod_workers.go:191] Error syncing pod 8d3a4941-8583-4434-957a-b01b28bba793 ("hello-world-app-5f5d8b66bb-csxdf_default(8d3a4941-8583-4434-957a-b01b28bba793)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-csxdf_default(8d3a4941-8583-4434-957a-b01b28bba793)"
	Jul 31 12:03:31 ingress-addon-legacy-604717 kubelet[1656]: E0731 12:03:31.781820    1656 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 31 12:03:31 ingress-addon-legacy-604717 kubelet[1656]: E0731 12:03:31.781872    1656 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 31 12:03:31 ingress-addon-legacy-604717 kubelet[1656]: E0731 12:03:31.781917    1656 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 31 12:03:31 ingress-addon-legacy-604717 kubelet[1656]: E0731 12:03:31.781954    1656 pod_workers.go:191] Error syncing pod 90841ed0-579a-43a9-a9ab-ea4fb0b092d9 ("kube-ingress-dns-minikube_kube-system(90841ed0-579a-43a9-a9ab-ea4fb0b092d9)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jul 31 12:03:33 ingress-addon-legacy-604717 kubelet[1656]: I0731 12:03:33.784616    1656 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-nsfhx" (UniqueName: "kubernetes.io/secret/90841ed0-579a-43a9-a9ab-ea4fb0b092d9-minikube-ingress-dns-token-nsfhx") pod "90841ed0-579a-43a9-a9ab-ea4fb0b092d9" (UID: "90841ed0-579a-43a9-a9ab-ea4fb0b092d9")
	Jul 31 12:03:33 ingress-addon-legacy-604717 kubelet[1656]: I0731 12:03:33.791431    1656 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90841ed0-579a-43a9-a9ab-ea4fb0b092d9-minikube-ingress-dns-token-nsfhx" (OuterVolumeSpecName: "minikube-ingress-dns-token-nsfhx") pod "90841ed0-579a-43a9-a9ab-ea4fb0b092d9" (UID: "90841ed0-579a-43a9-a9ab-ea4fb0b092d9"). InnerVolumeSpecName "minikube-ingress-dns-token-nsfhx". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 31 12:03:33 ingress-addon-legacy-604717 kubelet[1656]: I0731 12:03:33.884939    1656 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-nsfhx" (UniqueName: "kubernetes.io/secret/90841ed0-579a-43a9-a9ab-ea4fb0b092d9-minikube-ingress-dns-token-nsfhx") on node "ingress-addon-legacy-604717" DevicePath ""
	Jul 31 12:03:35 ingress-addon-legacy-604717 kubelet[1656]: E0731 12:03:35.277362    1656 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-ttdtn.1776f233ffc39284", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-ttdtn", UID:"7ad52c0e-4e8d-4420-812b-6b488c36e0d9", APIVersion:"v1", ResourceVersion:"466", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-604717"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12a06a5d03d6c84, ext:231017603879, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12a06a5d03d6c84, ext:231017603879, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-ttdtn.1776f233ffc39284" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 31 12:03:35 ingress-addon-legacy-604717 kubelet[1656]: E0731 12:03:35.306599    1656 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-ttdtn.1776f233ffc39284", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-ttdtn", UID:"7ad52c0e-4e8d-4420-812b-6b488c36e0d9", APIVersion:"v1", ResourceVersion:"466", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-604717"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12a06a5d03d6c84, ext:231017603879, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12a06a5d17d69fc, ext:231038574751, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-ttdtn.1776f233ffc39284" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 31 12:03:37 ingress-addon-legacy-604717 kubelet[1656]: I0731 12:03:37.780876    1656 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 3c0174b14c021fb019bd9ebefc7251c0188727e5ae03d1805a2cf6ada7b8d065
	Jul 31 12:03:37 ingress-addon-legacy-604717 kubelet[1656]: I0731 12:03:37.792893    1656 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7ad52c0e-4e8d-4420-812b-6b488c36e0d9-webhook-cert") pod "7ad52c0e-4e8d-4420-812b-6b488c36e0d9" (UID: "7ad52c0e-4e8d-4420-812b-6b488c36e0d9")
	Jul 31 12:03:37 ingress-addon-legacy-604717 kubelet[1656]: I0731 12:03:37.792953    1656 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-rd97t" (UniqueName: "kubernetes.io/secret/7ad52c0e-4e8d-4420-812b-6b488c36e0d9-ingress-nginx-token-rd97t") pod "7ad52c0e-4e8d-4420-812b-6b488c36e0d9" (UID: "7ad52c0e-4e8d-4420-812b-6b488c36e0d9")
	Jul 31 12:03:37 ingress-addon-legacy-604717 kubelet[1656]: I0731 12:03:37.803727    1656 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ad52c0e-4e8d-4420-812b-6b488c36e0d9-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7ad52c0e-4e8d-4420-812b-6b488c36e0d9" (UID: "7ad52c0e-4e8d-4420-812b-6b488c36e0d9"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 31 12:03:37 ingress-addon-legacy-604717 kubelet[1656]: I0731 12:03:37.805295    1656 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ad52c0e-4e8d-4420-812b-6b488c36e0d9-ingress-nginx-token-rd97t" (OuterVolumeSpecName: "ingress-nginx-token-rd97t") pod "7ad52c0e-4e8d-4420-812b-6b488c36e0d9" (UID: "7ad52c0e-4e8d-4420-812b-6b488c36e0d9"). InnerVolumeSpecName "ingress-nginx-token-rd97t". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 31 12:03:37 ingress-addon-legacy-604717 kubelet[1656]: I0731 12:03:37.893294    1656 reconciler.go:319] Volume detached for volume "ingress-nginx-token-rd97t" (UniqueName: "kubernetes.io/secret/7ad52c0e-4e8d-4420-812b-6b488c36e0d9-ingress-nginx-token-rd97t") on node "ingress-addon-legacy-604717" DevicePath ""
	Jul 31 12:03:37 ingress-addon-legacy-604717 kubelet[1656]: I0731 12:03:37.893323    1656 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7ad52c0e-4e8d-4420-812b-6b488c36e0d9-webhook-cert") on node "ingress-addon-legacy-604717" DevicePath ""
	Jul 31 12:03:38 ingress-addon-legacy-604717 kubelet[1656]: W0731 12:03:38.373366    1656 pod_container_deletor.go:77] Container "508e2ca3997385eb5e3c3e64a32382cb927b24c275ed539f06c1c57ba9e42eea" not found in pod's containers
	Jul 31 12:03:38 ingress-addon-legacy-604717 kubelet[1656]: I0731 12:03:38.375158    1656 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 3c0174b14c021fb019bd9ebefc7251c0188727e5ae03d1805a2cf6ada7b8d065
	Jul 31 12:03:38 ingress-addon-legacy-604717 kubelet[1656]: I0731 12:03:38.375419    1656 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 7ccbba56aee3dbbfa3683d0727b52f14b7697f149b2833cffb8e4ee7def64129
	Jul 31 12:03:38 ingress-addon-legacy-604717 kubelet[1656]: E0731 12:03:38.375689    1656 pod_workers.go:191] Error syncing pod 8d3a4941-8583-4434-957a-b01b28bba793 ("hello-world-app-5f5d8b66bb-csxdf_default(8d3a4941-8583-4434-957a-b01b28bba793)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-csxdf_default(8d3a4941-8583-4434-957a-b01b28bba793)"
	
	* 
	* ==> storage-provisioner [19b8b88deb28bce733b3f6b8efc331c47acb5a2c03536c16475b387f90d2476f] <==
	* I0731 12:00:25.027056       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 12:00:25.049891       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 12:00:25.049966       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 12:00:25.059629       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 12:00:25.059866       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-604717_723c6957-5d02-43d0-bfca-356addfaccc3!
	I0731 12:00:25.060993       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cd134936-3602-4d1b-8acc-c983feedeb83", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-604717_723c6957-5d02-43d0-bfca-356addfaccc3 became leader
	I0731 12:00:25.160801       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-604717_723c6957-5d02-43d0-bfca-356addfaccc3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-604717 -n ingress-addon-legacy-604717
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-604717 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (179.59s)
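Two distinct error families appear in the post-mortem logs above. The kube-scheduler "forbidden" messages at 11:59:41-11:59:42 are the usual startup race before the scheduler's RBAC permissions propagate, and they stop once "Caches are synced" is logged at 11:59:45. The actionable error is the kubelet's ImageInspectError for kube-ingress-dns-minikube: CRI-O rejects the short name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:..." because the node's /etc/containers/registries.conf defines no unqualified-search registries. A minimal way to confirm that diagnosis, assuming the cluster from this run is still up, is to pull the same image with a fully qualified reference, which bypasses short-name resolution entirely:

	# Hypothetical follow-up, not part of the test run: a fully qualified
	# name needs no unqualified-search registries to resolve.
	out/minikube-linux-arm64 -p ingress-addon-legacy-604717 ssh \
	  "sudo crictl pull docker.io/cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"

If that pull succeeds, the fix is either to qualify the image name in the addon manifest with a docker.io/ prefix or to add an unqualified-search entry on the node (for example a registries.conf.d drop-in containing unqualified-search-registries = ["docker.io"]; the drop-in location is an assumption about the node's containers-common layout, not something taken from this report).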

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (4.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951087 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951087 -- exec busybox-67b7f59bb-bbjrl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951087 -- exec busybox-67b7f59bb-bbjrl -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-951087 -- exec busybox-67b7f59bb-bbjrl -- sh -c "ping -c 1 192.168.58.1": exit status 1 (242.063364ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-bbjrl): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951087 -- exec busybox-67b7f59bb-sssw6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951087 -- exec busybox-67b7f59bb-sssw6 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-951087 -- exec busybox-67b7f59bb-sssw6 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (251.563115ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-sssw6): exit status 1
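Both busybox pods print the "PING 192.168.58.1 ... 56 data bytes" header and then fail with "ping: permission denied (are you root?)", so no probe ever reaches the host: busybox's ping needs either CAP_NET_RAW for a raw ICMP socket or a net.ipv4.ping_group_range that covers the pod's GID for the unprivileged datagram fallback, and here it evidently has neither. A quick check of which side is missing, assuming the pods from this run still exist, would be:

	# Hypothetical follow-up, not part of the test run: the kernel default
	# range "1 0" is empty, which disables unprivileged ICMP sockets.
	out/minikube-linux-arm64 kubectl -p multinode-951087 -- exec busybox-67b7f59bb-bbjrl -- \
	  sh -c "id; cat /proc/sys/net/ipv4/ping_group_range"

The usual remedies are widening the range through the pod's securityContext.sysctls (net.ipv4.ping_group_range is on Kubernetes' safe-sysctl list) or adding NET_RAW to the container's capabilities.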
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-951087
helpers_test.go:235: (dbg) docker inspect multinode-951087:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6a8d3aff5e733e121ed34bafca8a471542ec46bd04d1fe3366b9e0d8f0426fac",
	        "Created": "2023-07-31T12:10:15.976458725Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 916650,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-31T12:10:16.326158208Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/6a8d3aff5e733e121ed34bafca8a471542ec46bd04d1fe3366b9e0d8f0426fac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6a8d3aff5e733e121ed34bafca8a471542ec46bd04d1fe3366b9e0d8f0426fac/hostname",
	        "HostsPath": "/var/lib/docker/containers/6a8d3aff5e733e121ed34bafca8a471542ec46bd04d1fe3366b9e0d8f0426fac/hosts",
	        "LogPath": "/var/lib/docker/containers/6a8d3aff5e733e121ed34bafca8a471542ec46bd04d1fe3366b9e0d8f0426fac/6a8d3aff5e733e121ed34bafca8a471542ec46bd04d1fe3366b9e0d8f0426fac-json.log",
	        "Name": "/multinode-951087",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-951087:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-951087",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/81d956fc96887608cf1b6f194763d25cc58d02eab2b8db27f19f98437aceb1ea-init/diff:/var/lib/docker/overlay2/ea390dfb8f8baaae26b2c19880bf5069405274e04629daebd3f048abbe32d27b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/81d956fc96887608cf1b6f194763d25cc58d02eab2b8db27f19f98437aceb1ea/merged",
	                "UpperDir": "/var/lib/docker/overlay2/81d956fc96887608cf1b6f194763d25cc58d02eab2b8db27f19f98437aceb1ea/diff",
	                "WorkDir": "/var/lib/docker/overlay2/81d956fc96887608cf1b6f194763d25cc58d02eab2b8db27f19f98437aceb1ea/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-951087",
	                "Source": "/var/lib/docker/volumes/multinode-951087/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-951087",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-951087",
	                "name.minikube.sigs.k8s.io": "multinode-951087",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8b127fd31421bda33a638a9bb0822038dfbfb2feed1804a165d45901128751b6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35916"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35915"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35912"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35914"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35913"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8b127fd31421",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-951087": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6a8d3aff5e73",
	                        "multinode-951087"
	                    ],
	                    "NetworkID": "3cd2f3d254c994e4215f4901d6d6179d3f97d298cfa020729d0a2a1c62eddda5",
	                    "EndpointID": "41dcd9de66432f86b6d07574fb5f2caa04511839fe8b7f1965cedd1c7abd343c",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
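The inspect output above rules out a routing explanation: 192.168.58.1 is the gateway of the multinode-951087 bridge network and the node sits beside it at 192.168.58.2, so the test pinged a reachable address and the failure is purely a pod-side permission issue. The same gateway value can be read back with a Go template over the fields shown above:

	# Reads the gateway from the NetworkSettings.Networks map printed above.
	docker inspect multinode-951087 --format '{{range .NetworkSettings.Networks}}{{.Gateway}}{{end}}'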
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-951087 -n multinode-951087
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-951087 logs -n 25: (1.679303246s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-610458                           | mount-start-2-610458 | jenkins | v1.31.1 | 31 Jul 23 12:09 UTC | 31 Jul 23 12:09 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-610458 ssh -- ls                    | mount-start-2-610458 | jenkins | v1.31.1 | 31 Jul 23 12:09 UTC | 31 Jul 23 12:09 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-608563                           | mount-start-1-608563 | jenkins | v1.31.1 | 31 Jul 23 12:09 UTC | 31 Jul 23 12:09 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-610458 ssh -- ls                    | mount-start-2-610458 | jenkins | v1.31.1 | 31 Jul 23 12:09 UTC | 31 Jul 23 12:09 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-610458                           | mount-start-2-610458 | jenkins | v1.31.1 | 31 Jul 23 12:09 UTC | 31 Jul 23 12:10 UTC |
	| start   | -p mount-start-2-610458                           | mount-start-2-610458 | jenkins | v1.31.1 | 31 Jul 23 12:10 UTC | 31 Jul 23 12:10 UTC |
	| ssh     | mount-start-2-610458 ssh -- ls                    | mount-start-2-610458 | jenkins | v1.31.1 | 31 Jul 23 12:10 UTC | 31 Jul 23 12:10 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-610458                           | mount-start-2-610458 | jenkins | v1.31.1 | 31 Jul 23 12:10 UTC | 31 Jul 23 12:10 UTC |
	| delete  | -p mount-start-1-608563                           | mount-start-1-608563 | jenkins | v1.31.1 | 31 Jul 23 12:10 UTC | 31 Jul 23 12:10 UTC |
	| start   | -p multinode-951087                               | multinode-951087     | jenkins | v1.31.1 | 31 Jul 23 12:10 UTC | 31 Jul 23 12:11 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-951087 -- apply -f                   | multinode-951087     | jenkins | v1.31.1 | 31 Jul 23 12:11 UTC | 31 Jul 23 12:11 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-951087 -- rollout                    | multinode-951087     | jenkins | v1.31.1 | 31 Jul 23 12:11 UTC | 31 Jul 23 12:11 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-951087 -- get pods -o                | multinode-951087     | jenkins | v1.31.1 | 31 Jul 23 12:11 UTC | 31 Jul 23 12:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-951087 -- get pods -o                | multinode-951087     | jenkins | v1.31.1 | 31 Jul 23 12:11 UTC | 31 Jul 23 12:11 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-951087 -- exec                       | multinode-951087     | jenkins | v1.31.1 | 31 Jul 23 12:11 UTC | 31 Jul 23 12:11 UTC |
	|         | busybox-67b7f59bb-bbjrl --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-951087 -- exec                       | multinode-951087     | jenkins | v1.31.1 | 31 Jul 23 12:11 UTC | 31 Jul 23 12:11 UTC |
	|         | busybox-67b7f59bb-sssw6 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-951087 -- exec                       | multinode-951087     | jenkins | v1.31.1 | 31 Jul 23 12:11 UTC | 31 Jul 23 12:11 UTC |
	|         | busybox-67b7f59bb-bbjrl --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-951087 -- exec                       | multinode-951087     | jenkins | v1.31.1 | 31 Jul 23 12:11 UTC | 31 Jul 23 12:11 UTC |
	|         | busybox-67b7f59bb-sssw6 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-951087 -- exec                       | multinode-951087     | jenkins | v1.31.1 | 31 Jul 23 12:11 UTC | 31 Jul 23 12:11 UTC |
	|         | busybox-67b7f59bb-bbjrl -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-951087 -- exec                       | multinode-951087     | jenkins | v1.31.1 | 31 Jul 23 12:11 UTC | 31 Jul 23 12:11 UTC |
	|         | busybox-67b7f59bb-sssw6 -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-951087 -- get pods -o                | multinode-951087     | jenkins | v1.31.1 | 31 Jul 23 12:11 UTC | 31 Jul 23 12:11 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-951087 -- exec                       | multinode-951087     | jenkins | v1.31.1 | 31 Jul 23 12:11 UTC | 31 Jul 23 12:11 UTC |
	|         | busybox-67b7f59bb-bbjrl                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-951087 -- exec                       | multinode-951087     | jenkins | v1.31.1 | 31 Jul 23 12:11 UTC |                     |
	|         | busybox-67b7f59bb-bbjrl -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-951087 -- exec                       | multinode-951087     | jenkins | v1.31.1 | 31 Jul 23 12:11 UTC | 31 Jul 23 12:11 UTC |
	|         | busybox-67b7f59bb-sssw6                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-951087 -- exec                       | multinode-951087     | jenkins | v1.31.1 | 31 Jul 23 12:11 UTC |                     |
	|         | busybox-67b7f59bb-sssw6 -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/31 12:10:10
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 12:10:10.582176  916191 out.go:296] Setting OutFile to fd 1 ...
	I0731 12:10:10.582376  916191 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 12:10:10.582388  916191 out.go:309] Setting ErrFile to fd 2...
	I0731 12:10:10.582394  916191 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 12:10:10.582716  916191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-847174/.minikube/bin
	I0731 12:10:10.583214  916191 out.go:303] Setting JSON to false
	I0731 12:10:10.584313  916191 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":71558,"bootTime":1690733853,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0731 12:10:10.584386  916191 start.go:138] virtualization:  
	I0731 12:10:10.588381  916191 out.go:177] * [multinode-951087] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0731 12:10:10.590468  916191 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 12:10:10.590685  916191 notify.go:220] Checking for updates...
	I0731 12:10:10.593227  916191 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:10:10.595111  916191 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig
	I0731 12:10:10.596873  916191 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube
	I0731 12:10:10.598581  916191 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 12:10:10.600142  916191 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:10:10.601950  916191 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 12:10:10.627013  916191 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 12:10:10.627124  916191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 12:10:10.719371  916191 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:35 SystemTime:2023-07-31 12:10:10.709179488 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 12:10:10.719486  916191 docker.go:294] overlay module found
	I0731 12:10:10.721695  916191 out.go:177] * Using the docker driver based on user configuration
	I0731 12:10:10.723532  916191 start.go:298] selected driver: docker
	I0731 12:10:10.723551  916191 start.go:898] validating driver "docker" against <nil>
	I0731 12:10:10.723567  916191 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:10:10.724221  916191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 12:10:10.797580  916191 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:35 SystemTime:2023-07-31 12:10:10.786415772 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 12:10:10.797743  916191 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 12:10:10.798031  916191 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:10:10.799867  916191 out.go:177] * Using Docker driver with root privileges
	I0731 12:10:10.801685  916191 cni.go:84] Creating CNI manager for ""
	I0731 12:10:10.801711  916191 cni.go:136] 0 nodes found, recommending kindnet
	I0731 12:10:10.801727  916191 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 12:10:10.801742  916191 start_flags.go:319] config:
	{Name:multinode-951087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-951087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 12:10:10.804919  916191 out.go:177] * Starting control plane node multinode-951087 in cluster multinode-951087
	I0731 12:10:10.806493  916191 cache.go:122] Beginning downloading kic base image for docker with crio
	I0731 12:10:10.808077  916191 out.go:177] * Pulling base image ...
	I0731 12:10:10.809927  916191 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 12:10:10.809954  916191 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0731 12:10:10.809981  916191 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4
	I0731 12:10:10.809992  916191 cache.go:57] Caching tarball of preloaded images
	I0731 12:10:10.810059  916191 preload.go:174] Found /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0731 12:10:10.810070  916191 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0731 12:10:10.810420  916191 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/config.json ...
	I0731 12:10:10.810451  916191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/config.json: {Name:mk5fb10d5962280b47b2d2f6fe0bd059e9c46bcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:10:10.828426  916191 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0731 12:10:10.828452  916191 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0731 12:10:10.828471  916191 cache.go:195] Successfully downloaded all kic artifacts
	I0731 12:10:10.828516  916191 start.go:365] acquiring machines lock for multinode-951087: {Name:mk52a279674abb223a5af24374732ca01d191431 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:10:10.828639  916191 start.go:369] acquired machines lock for "multinode-951087" in 100.168µs
	I0731 12:10:10.828675  916191 start.go:93] Provisioning new machine with config: &{Name:multinode-951087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-951087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 12:10:10.828815  916191 start.go:125] createHost starting for "" (driver="docker")
	I0731 12:10:10.832227  916191 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0731 12:10:10.832464  916191 start.go:159] libmachine.API.Create for "multinode-951087" (driver="docker")
	I0731 12:10:10.832489  916191 client.go:168] LocalClient.Create starting
	I0731 12:10:10.832579  916191 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem
	I0731 12:10:10.832619  916191 main.go:141] libmachine: Decoding PEM data...
	I0731 12:10:10.832638  916191 main.go:141] libmachine: Parsing certificate...
	I0731 12:10:10.832709  916191 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem
	I0731 12:10:10.832733  916191 main.go:141] libmachine: Decoding PEM data...
	I0731 12:10:10.832749  916191 main.go:141] libmachine: Parsing certificate...
	I0731 12:10:10.833108  916191 cli_runner.go:164] Run: docker network inspect multinode-951087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 12:10:10.850302  916191 cli_runner.go:211] docker network inspect multinode-951087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 12:10:10.850385  916191 network_create.go:281] running [docker network inspect multinode-951087] to gather additional debugging logs...
	I0731 12:10:10.850402  916191 cli_runner.go:164] Run: docker network inspect multinode-951087
	W0731 12:10:10.867641  916191 cli_runner.go:211] docker network inspect multinode-951087 returned with exit code 1
	I0731 12:10:10.867671  916191 network_create.go:284] error running [docker network inspect multinode-951087]: docker network inspect multinode-951087: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-951087 not found
	I0731 12:10:10.867683  916191 network_create.go:286] output of [docker network inspect multinode-951087]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-951087 not found
	
	** /stderr **
	I0731 12:10:10.867749  916191 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 12:10:10.885175  916191 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-613e9d6d9aa3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:95:dc:f7:db} reservation:<nil>}
	I0731 12:10:10.885547  916191 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000be92a0}
	I0731 12:10:10.885571  916191 network_create.go:123] attempt to create docker network multinode-951087 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0731 12:10:10.885630  916191 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-951087 multinode-951087
	I0731 12:10:10.965344  916191 network_create.go:107] docker network multinode-951087 192.168.58.0/24 created
	I0731 12:10:10.965376  916191 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-951087" container
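For reference, the subnet and gateway chosen above can be confirmed against the network Docker actually created; a minimal check, assuming the docker CLI on the same host:

	docker network inspect multinode-951087 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	# expected, per the log above: 192.168.58.0/24 192.168.58.1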
	I0731 12:10:10.965459  916191 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 12:10:10.981643  916191 cli_runner.go:164] Run: docker volume create multinode-951087 --label name.minikube.sigs.k8s.io=multinode-951087 --label created_by.minikube.sigs.k8s.io=true
	I0731 12:10:11.001467  916191 oci.go:103] Successfully created a docker volume multinode-951087
	I0731 12:10:11.001600  916191 cli_runner.go:164] Run: docker run --rm --name multinode-951087-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-951087 --entrypoint /usr/bin/test -v multinode-951087:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0731 12:10:11.572431  916191 oci.go:107] Successfully prepared a docker volume multinode-951087
	I0731 12:10:11.572465  916191 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 12:10:11.572486  916191 kic.go:190] Starting extracting preloaded images to volume ...
	I0731 12:10:11.572572  916191 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-951087:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0731 12:10:15.888729  916191 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-951087:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.31610948s)
	I0731 12:10:15.888760  916191 kic.go:199] duration metric: took 4.316271 seconds to extract preloaded images to volume
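A quick sanity check that the preload actually landed in the volume (a sketch; it reuses the kicbase image reference from the log and assumes /usr/bin/ls exists inside it, /var/lib/containers being where CRI-O keeps its image store):

	docker run --rm --entrypoint /usr/bin/ls \
	  -v multinode-951087:/var \
	  gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 \
	  /var/lib/containers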
	W0731 12:10:15.888917  916191 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0731 12:10:15.889022  916191 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0731 12:10:15.956184  916191 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-951087 --name multinode-951087 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-951087 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-951087 --network multinode-951087 --ip 192.168.58.2 --volume multinode-951087:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0731 12:10:16.335374  916191 cli_runner.go:164] Run: docker container inspect multinode-951087 --format={{.State.Running}}
	I0731 12:10:16.367949  916191 cli_runner.go:164] Run: docker container inspect multinode-951087 --format={{.State.Status}}
	I0731 12:10:16.402828  916191 cli_runner.go:164] Run: docker exec multinode-951087 stat /var/lib/dpkg/alternatives/iptables
	I0731 12:10:16.469563  916191 oci.go:144] the created container "multinode-951087" has a running status.
	I0731 12:10:16.469597  916191 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/multinode-951087/id_rsa...
	I0731 12:10:16.709413  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/multinode-951087/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0731 12:10:16.709460  916191 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16968-847174/.minikube/machines/multinode-951087/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0731 12:10:16.737959  916191 cli_runner.go:164] Run: docker container inspect multinode-951087 --format={{.State.Status}}
	I0731 12:10:16.759348  916191 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0731 12:10:16.759367  916191 kic_runner.go:114] Args: [docker exec --privileged multinode-951087 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0731 12:10:16.847426  916191 cli_runner.go:164] Run: docker container inspect multinode-951087 --format={{.State.Status}}
	I0731 12:10:16.874661  916191 machine.go:88] provisioning docker machine ...
	I0731 12:10:16.874696  916191 ubuntu.go:169] provisioning hostname "multinode-951087"
	I0731 12:10:16.874774  916191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951087
	I0731 12:10:16.922006  916191 main.go:141] libmachine: Using SSH client type: native
	I0731 12:10:16.922480  916191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 35916 <nil> <nil>}
	I0731 12:10:16.922493  916191 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-951087 && echo "multinode-951087" | sudo tee /etc/hostname
	I0731 12:10:16.923227  916191 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0731 12:10:20.084905  916191 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-951087
	
	I0731 12:10:20.085076  916191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951087
	I0731 12:10:20.110642  916191 main.go:141] libmachine: Using SSH client type: native
	I0731 12:10:20.111133  916191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 35916 <nil> <nil>}
	I0731 12:10:20.111161  916191 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-951087' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-951087/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-951087' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 12:10:20.249468  916191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 12:10:20.249492  916191 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16968-847174/.minikube CaCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16968-847174/.minikube}
	I0731 12:10:20.249512  916191 ubuntu.go:177] setting up certificates
	I0731 12:10:20.249519  916191 provision.go:83] configureAuth start
	I0731 12:10:20.249598  916191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-951087
	I0731 12:10:20.270775  916191 provision.go:138] copyHostCerts
	I0731 12:10:20.270815  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem
	I0731 12:10:20.270848  916191 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem, removing ...
	I0731 12:10:20.270855  916191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem
	I0731 12:10:20.270932  916191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem (1082 bytes)
	I0731 12:10:20.271020  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem
	I0731 12:10:20.271039  916191 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem, removing ...
	I0731 12:10:20.271044  916191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem
	I0731 12:10:20.271071  916191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem (1123 bytes)
	I0731 12:10:20.271155  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem
	I0731 12:10:20.271170  916191 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem, removing ...
	I0731 12:10:20.271174  916191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem
	I0731 12:10:20.271199  916191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem (1679 bytes)
	I0731 12:10:20.271254  916191 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca-key.pem org=jenkins.multinode-951087 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-951087]
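The server certificate generated above should carry exactly the SANs listed in the log (192.168.58.2, 127.0.0.1, localhost, minikube, multinode-951087); a sketch for inspecting them with openssl, using the path from the log:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'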
	I0731 12:10:20.510701  916191 provision.go:172] copyRemoteCerts
	I0731 12:10:20.510796  916191 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 12:10:20.510847  916191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951087
	I0731 12:10:20.530427  916191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35916 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/multinode-951087/id_rsa Username:docker}
	I0731 12:10:20.626831  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 12:10:20.626892  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 12:10:20.656474  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 12:10:20.656545  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 12:10:20.686138  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 12:10:20.686227  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 12:10:20.715571  916191 provision.go:86] duration metric: configureAuth took 466.023857ms
	I0731 12:10:20.715601  916191 ubuntu.go:193] setting minikube options for container-runtime
	I0731 12:10:20.715821  916191 config.go:182] Loaded profile config "multinode-951087": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 12:10:20.715936  916191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951087
	I0731 12:10:20.735552  916191 main.go:141] libmachine: Using SSH client type: native
	I0731 12:10:20.735988  916191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 35916 <nil> <nil>}
	I0731 12:10:20.736010  916191 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 12:10:20.978499  916191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 12:10:20.978587  916191 machine.go:91] provisioned docker machine in 4.103904552s
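To double-check the option file written and applied by the crio restart above (a sketch, exec'ing into the node container):

	docker exec multinode-951087 cat /etc/sysconfig/crio.minikube
	# expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '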
	I0731 12:10:20.978613  916191 client.go:171] LocalClient.Create took 10.146113182s
	I0731 12:10:20.978678  916191 start.go:167] duration metric: libmachine.API.Create for "multinode-951087" took 10.146214089s
	I0731 12:10:20.978706  916191 start.go:300] post-start starting for "multinode-951087" (driver="docker")
	I0731 12:10:20.978730  916191 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 12:10:20.978855  916191 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 12:10:20.978922  916191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951087
	I0731 12:10:21.000503  916191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35916 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/multinode-951087/id_rsa Username:docker}
	I0731 12:10:21.104637  916191 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 12:10:21.109353  916191 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0731 12:10:21.109374  916191 command_runner.go:130] > NAME="Ubuntu"
	I0731 12:10:21.109397  916191 command_runner.go:130] > VERSION_ID="22.04"
	I0731 12:10:21.109404  916191 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0731 12:10:21.109410  916191 command_runner.go:130] > VERSION_CODENAME=jammy
	I0731 12:10:21.109414  916191 command_runner.go:130] > ID=ubuntu
	I0731 12:10:21.109420  916191 command_runner.go:130] > ID_LIKE=debian
	I0731 12:10:21.109428  916191 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0731 12:10:21.109434  916191 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0731 12:10:21.109445  916191 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0731 12:10:21.109454  916191 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0731 12:10:21.109461  916191 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0731 12:10:21.109558  916191 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 12:10:21.109595  916191 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 12:10:21.109608  916191 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 12:10:21.109618  916191 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0731 12:10:21.109630  916191 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-847174/.minikube/addons for local assets ...
	I0731 12:10:21.109702  916191 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-847174/.minikube/files for local assets ...
	I0731 12:10:21.109803  916191 filesync.go:149] local asset: /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem -> 8525502.pem in /etc/ssl/certs
	I0731 12:10:21.109815  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem -> /etc/ssl/certs/8525502.pem
	I0731 12:10:21.109917  916191 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 12:10:21.122040  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem --> /etc/ssl/certs/8525502.pem (1708 bytes)
	I0731 12:10:21.153345  916191 start.go:303] post-start completed in 174.611093ms
	I0731 12:10:21.153743  916191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-951087
	I0731 12:10:21.173519  916191 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/config.json ...
	I0731 12:10:21.173803  916191 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 12:10:21.173853  916191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951087
	I0731 12:10:21.191732  916191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35916 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/multinode-951087/id_rsa Username:docker}
	I0731 12:10:21.281968  916191 command_runner.go:130] > 16%
	I0731 12:10:21.282408  916191 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 12:10:21.288156  916191 command_runner.go:130] > 165G
	I0731 12:10:21.288186  916191 start.go:128] duration metric: createHost completed in 10.459352794s
	I0731 12:10:21.288195  916191 start.go:83] releasing machines lock for "multinode-951087", held for 10.459542356s
	I0731 12:10:21.288266  916191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-951087
	I0731 12:10:21.306227  916191 ssh_runner.go:195] Run: cat /version.json
	I0731 12:10:21.306285  916191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951087
	I0731 12:10:21.306528  916191 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 12:10:21.306585  916191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951087
	I0731 12:10:21.333443  916191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35916 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/multinode-951087/id_rsa Username:docker}
	I0731 12:10:21.336094  916191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35916 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/multinode-951087/id_rsa Username:docker}
	I0731 12:10:21.424834  916191 command_runner.go:130] > {"iso_version": "v1.30.1-1689243309-16875", "kicbase_version": "v0.0.40", "minikube_version": "v1.31.0", "commit": "085433cd1b734742870dea5be8f9ee2ce4c54148"}
	I0731 12:10:21.425386  916191 ssh_runner.go:195] Run: systemctl --version
	I0731 12:10:21.579760  916191 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0731 12:10:21.583328  916191 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.9)
	I0731 12:10:21.583369  916191 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0731 12:10:21.583470  916191 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 12:10:21.730538  916191 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 12:10:21.736221  916191 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0731 12:10:21.736243  916191 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0731 12:10:21.736253  916191 command_runner.go:130] > Device: 3ah/58d	Inode: 5967833     Links: 1
	I0731 12:10:21.736260  916191 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 12:10:21.736268  916191 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0731 12:10:21.736274  916191 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0731 12:10:21.736280  916191 command_runner.go:130] > Change: 2023-07-31 11:47:51.445665001 +0000
	I0731 12:10:21.736286  916191 command_runner.go:130] >  Birth: 2023-07-31 11:47:51.445665001 +0000
	I0731 12:10:21.736577  916191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 12:10:21.761028  916191 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0731 12:10:21.761199  916191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 12:10:21.801006  916191 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0731 12:10:21.801050  916191 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
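The two find/mv passes above sideline the stock loopback and bridge/podman CNI configs by renaming them to *.mk_disabled, so only the CNI minikube deploys later is active. A sketch to see what was disabled:

	docker exec multinode-951087 ls /etc/cni/net.d
	# expected to show 87-podman-bridge.conflist.mk_disabled,
	# 100-crio-bridge.conf.mk_disabled and 200-loopback.conf.mk_disabled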
	I0731 12:10:21.801094  916191 start.go:466] detecting cgroup driver to use...
	I0731 12:10:21.801128  916191 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0731 12:10:21.801197  916191 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 12:10:21.820802  916191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:10:21.834844  916191 docker.go:196] disabling cri-docker service (if available) ...
	I0731 12:10:21.834942  916191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 12:10:21.851742  916191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 12:10:21.869548  916191 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 12:10:21.973025  916191 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 12:10:22.083448  916191 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0731 12:10:22.083488  916191 docker.go:212] disabling docker service ...
	I0731 12:10:22.083584  916191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 12:10:22.106374  916191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 12:10:22.121231  916191 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 12:10:22.136535  916191 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0731 12:10:22.227258  916191 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 12:10:22.339195  916191 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0731 12:10:22.339274  916191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 12:10:22.354215  916191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:10:22.374103  916191 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0731 12:10:22.375745  916191 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 12:10:22.375832  916191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 12:10:22.389535  916191 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 12:10:22.389632  916191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 12:10:22.403225  916191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 12:10:22.415718  916191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
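After the three sed edits above, the CRI-O drop-in should carry the pause image and cgroup settings; a minimal check (the expected values are an assumption derived from the commands, not captured output):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"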
	I0731 12:10:22.428449  916191 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 12:10:22.440186  916191 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 12:10:22.449986  916191 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0731 12:10:22.451365  916191 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 12:10:22.462324  916191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:10:22.562254  916191 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 12:10:22.683088  916191 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 12:10:22.683261  916191 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 12:10:22.688515  916191 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0731 12:10:22.688539  916191 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0731 12:10:22.688548  916191 command_runner.go:130] > Device: 43h/67d	Inode: 186         Links: 1
	I0731 12:10:22.688573  916191 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 12:10:22.688586  916191 command_runner.go:130] > Access: 2023-07-31 12:10:22.664473265 +0000
	I0731 12:10:22.688595  916191 command_runner.go:130] > Modify: 2023-07-31 12:10:22.664473265 +0000
	I0731 12:10:22.688611  916191 command_runner.go:130] > Change: 2023-07-31 12:10:22.664473265 +0000
	I0731 12:10:22.688621  916191 command_runner.go:130] >  Birth: -
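The 60-second socket wait above boils down to polling for the unix socket; a rough shell equivalent (a sketch only, minikube does this in Go via repeated stat calls like the one shown):

	for i in $(seq 1 60); do
	  test -S /var/run/crio/crio.sock && break
	  sleep 1
	done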
	I0731 12:10:22.688883  916191 start.go:534] Will wait 60s for crictl version
	I0731 12:10:22.688946  916191 ssh_runner.go:195] Run: which crictl
	I0731 12:10:22.693616  916191 command_runner.go:130] > /usr/bin/crictl
	I0731 12:10:22.693776  916191 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 12:10:22.735806  916191 command_runner.go:130] > Version:  0.1.0
	I0731 12:10:22.735868  916191 command_runner.go:130] > RuntimeName:  cri-o
	I0731 12:10:22.735893  916191 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0731 12:10:22.735915  916191 command_runner.go:130] > RuntimeApiVersion:  v1
	I0731 12:10:22.738518  916191 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0731 12:10:22.738668  916191 ssh_runner.go:195] Run: crio --version
	I0731 12:10:22.785301  916191 command_runner.go:130] > crio version 1.24.6
	I0731 12:10:22.785372  916191 command_runner.go:130] > Version:          1.24.6
	I0731 12:10:22.785403  916191 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0731 12:10:22.785435  916191 command_runner.go:130] > GitTreeState:     clean
	I0731 12:10:22.785459  916191 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0731 12:10:22.785482  916191 command_runner.go:130] > GoVersion:        go1.18.2
	I0731 12:10:22.785504  916191 command_runner.go:130] > Compiler:         gc
	I0731 12:10:22.785537  916191 command_runner.go:130] > Platform:         linux/arm64
	I0731 12:10:22.785560  916191 command_runner.go:130] > Linkmode:         dynamic
	I0731 12:10:22.785584  916191 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0731 12:10:22.785604  916191 command_runner.go:130] > SeccompEnabled:   true
	I0731 12:10:22.785634  916191 command_runner.go:130] > AppArmorEnabled:  false
	I0731 12:10:22.787286  916191 ssh_runner.go:195] Run: crio --version
	I0731 12:10:22.831770  916191 command_runner.go:130] > crio version 1.24.6
	I0731 12:10:22.831839  916191 command_runner.go:130] > Version:          1.24.6
	I0731 12:10:22.831862  916191 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0731 12:10:22.831883  916191 command_runner.go:130] > GitTreeState:     clean
	I0731 12:10:22.831920  916191 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0731 12:10:22.831947  916191 command_runner.go:130] > GoVersion:        go1.18.2
	I0731 12:10:22.831968  916191 command_runner.go:130] > Compiler:         gc
	I0731 12:10:22.831989  916191 command_runner.go:130] > Platform:         linux/arm64
	I0731 12:10:22.832022  916191 command_runner.go:130] > Linkmode:         dynamic
	I0731 12:10:22.832050  916191 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0731 12:10:22.832100  916191 command_runner.go:130] > SeccompEnabled:   true
	I0731 12:10:22.832140  916191 command_runner.go:130] > AppArmorEnabled:  false
	I0731 12:10:22.837818  916191 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0731 12:10:22.839434  916191 cli_runner.go:164] Run: docker network inspect multinode-951087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 12:10:22.861662  916191 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0731 12:10:22.866448  916191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
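The one-liner above rewrites /etc/hosts via a temp file rather than editing in place: strip any stale host.minikube.internal line, append the fresh mapping, then copy the result back with sudo. Verifying the outcome (sketch):

	docker exec multinode-951087 grep host.minikube.internal /etc/hosts
	# expected: 192.168.58.1	host.minikube.internal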
	I0731 12:10:22.880456  916191 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 12:10:22.880535  916191 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 12:10:22.943120  916191 command_runner.go:130] > {
	I0731 12:10:22.943145  916191 command_runner.go:130] >   "images": [
	I0731 12:10:22.943150  916191 command_runner.go:130] >     {
	I0731 12:10:22.943160  916191 command_runner.go:130] >       "id": "b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79",
	I0731 12:10:22.943165  916191 command_runner.go:130] >       "repoTags": [
	I0731 12:10:22.943182  916191 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0731 12:10:22.943187  916191 command_runner.go:130] >       ],
	I0731 12:10:22.943193  916191 command_runner.go:130] >       "repoDigests": [
	I0731 12:10:22.943208  916191 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f",
	I0731 12:10:22.943217  916191 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"
	I0731 12:10:22.943225  916191 command_runner.go:130] >       ],
	I0731 12:10:22.943230  916191 command_runner.go:130] >       "size": "60881430",
	I0731 12:10:22.943235  916191 command_runner.go:130] >       "uid": null,
	I0731 12:10:22.943242  916191 command_runner.go:130] >       "username": "",
	I0731 12:10:22.943250  916191 command_runner.go:130] >       "spec": null,
	I0731 12:10:22.943258  916191 command_runner.go:130] >       "pinned": false
	I0731 12:10:22.943262  916191 command_runner.go:130] >     },
	I0731 12:10:22.943276  916191 command_runner.go:130] >     {
	I0731 12:10:22.943283  916191 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0731 12:10:22.943291  916191 command_runner.go:130] >       "repoTags": [
	I0731 12:10:22.943297  916191 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0731 12:10:22.943302  916191 command_runner.go:130] >       ],
	I0731 12:10:22.943311  916191 command_runner.go:130] >       "repoDigests": [
	I0731 12:10:22.943321  916191 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0731 12:10:22.943334  916191 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0731 12:10:22.943339  916191 command_runner.go:130] >       ],
	I0731 12:10:22.943350  916191 command_runner.go:130] >       "size": "29037500",
	I0731 12:10:22.943355  916191 command_runner.go:130] >       "uid": null,
	I0731 12:10:22.943362  916191 command_runner.go:130] >       "username": "",
	I0731 12:10:22.943367  916191 command_runner.go:130] >       "spec": null,
	I0731 12:10:22.943372  916191 command_runner.go:130] >       "pinned": false
	I0731 12:10:22.943377  916191 command_runner.go:130] >     },
	I0731 12:10:22.943381  916191 command_runner.go:130] >     {
	I0731 12:10:22.943390  916191 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0731 12:10:22.943399  916191 command_runner.go:130] >       "repoTags": [
	I0731 12:10:22.943406  916191 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0731 12:10:22.943410  916191 command_runner.go:130] >       ],
	I0731 12:10:22.943415  916191 command_runner.go:130] >       "repoDigests": [
	I0731 12:10:22.943428  916191 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0731 12:10:22.943438  916191 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0731 12:10:22.943446  916191 command_runner.go:130] >       ],
	I0731 12:10:22.943451  916191 command_runner.go:130] >       "size": "51393451",
	I0731 12:10:22.943456  916191 command_runner.go:130] >       "uid": null,
	I0731 12:10:22.943461  916191 command_runner.go:130] >       "username": "",
	I0731 12:10:22.943467  916191 command_runner.go:130] >       "spec": null,
	I0731 12:10:22.943472  916191 command_runner.go:130] >       "pinned": false
	I0731 12:10:22.943484  916191 command_runner.go:130] >     },
	I0731 12:10:22.943489  916191 command_runner.go:130] >     {
	I0731 12:10:22.943500  916191 command_runner.go:130] >       "id": "24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737",
	I0731 12:10:22.943505  916191 command_runner.go:130] >       "repoTags": [
	I0731 12:10:22.943511  916191 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0731 12:10:22.943520  916191 command_runner.go:130] >       ],
	I0731 12:10:22.943525  916191 command_runner.go:130] >       "repoDigests": [
	I0731 12:10:22.943540  916191 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd",
	I0731 12:10:22.943551  916191 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"
	I0731 12:10:22.943560  916191 command_runner.go:130] >       ],
	I0731 12:10:22.943566  916191 command_runner.go:130] >       "size": "182283991",
	I0731 12:10:22.943573  916191 command_runner.go:130] >       "uid": {
	I0731 12:10:22.943582  916191 command_runner.go:130] >         "value": "0"
	I0731 12:10:22.943586  916191 command_runner.go:130] >       },
	I0731 12:10:22.943592  916191 command_runner.go:130] >       "username": "",
	I0731 12:10:22.943599  916191 command_runner.go:130] >       "spec": null,
	I0731 12:10:22.943606  916191 command_runner.go:130] >       "pinned": false
	I0731 12:10:22.943613  916191 command_runner.go:130] >     },
	I0731 12:10:22.943618  916191 command_runner.go:130] >     {
	I0731 12:10:22.943626  916191 command_runner.go:130] >       "id": "39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473",
	I0731 12:10:22.943633  916191 command_runner.go:130] >       "repoTags": [
	I0731 12:10:22.943640  916191 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.3"
	I0731 12:10:22.943646  916191 command_runner.go:130] >       ],
	I0731 12:10:22.943655  916191 command_runner.go:130] >       "repoDigests": [
	I0731 12:10:22.943678  916191 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:699defe487a15c642f6f7718de0684e49f4353e6c63f93308d314aab4dedd090",
	I0731 12:10:22.943691  916191 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"
	I0731 12:10:22.943702  916191 command_runner.go:130] >       ],
	I0731 12:10:22.943711  916191 command_runner.go:130] >       "size": "116204496",
	I0731 12:10:22.943716  916191 command_runner.go:130] >       "uid": {
	I0731 12:10:22.943731  916191 command_runner.go:130] >         "value": "0"
	I0731 12:10:22.943736  916191 command_runner.go:130] >       },
	I0731 12:10:22.943742  916191 command_runner.go:130] >       "username": "",
	I0731 12:10:22.943753  916191 command_runner.go:130] >       "spec": null,
	I0731 12:10:22.943758  916191 command_runner.go:130] >       "pinned": false
	I0731 12:10:22.943763  916191 command_runner.go:130] >     },
	I0731 12:10:22.943767  916191 command_runner.go:130] >     {
	I0731 12:10:22.943778  916191 command_runner.go:130] >       "id": "ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8",
	I0731 12:10:22.943789  916191 command_runner.go:130] >       "repoTags": [
	I0731 12:10:22.943798  916191 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.3"
	I0731 12:10:22.943804  916191 command_runner.go:130] >       ],
	I0731 12:10:22.943810  916191 command_runner.go:130] >       "repoDigests": [
	I0731 12:10:22.943822  916191 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:06e413293f95c209052e171448fe17685f625c5edfbc7b63df5d87d07b4711c0",
	I0731 12:10:22.943834  916191 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e"
	I0731 12:10:22.943839  916191 command_runner.go:130] >       ],
	I0731 12:10:22.943845  916191 command_runner.go:130] >       "size": "108667702",
	I0731 12:10:22.943853  916191 command_runner.go:130] >       "uid": {
	I0731 12:10:22.943858  916191 command_runner.go:130] >         "value": "0"
	I0731 12:10:22.943866  916191 command_runner.go:130] >       },
	I0731 12:10:22.943876  916191 command_runner.go:130] >       "username": "",
	I0731 12:10:22.943881  916191 command_runner.go:130] >       "spec": null,
	I0731 12:10:22.943893  916191 command_runner.go:130] >       "pinned": false
	I0731 12:10:22.943900  916191 command_runner.go:130] >     },
	I0731 12:10:22.943904  916191 command_runner.go:130] >     {
	I0731 12:10:22.943912  916191 command_runner.go:130] >       "id": "fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a",
	I0731 12:10:22.943917  916191 command_runner.go:130] >       "repoTags": [
	I0731 12:10:22.943926  916191 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.3"
	I0731 12:10:22.943930  916191 command_runner.go:130] >       ],
	I0731 12:10:22.943935  916191 command_runner.go:130] >       "repoDigests": [
	I0731 12:10:22.943947  916191 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:948423f9b566c1f1bfab123911520168c041193addb9157d7121eaf2bb5afc53",
	I0731 12:10:22.943964  916191 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"
	I0731 12:10:22.943972  916191 command_runner.go:130] >       ],
	I0731 12:10:22.943977  916191 command_runner.go:130] >       "size": "68099991",
	I0731 12:10:22.943997  916191 command_runner.go:130] >       "uid": null,
	I0731 12:10:22.944002  916191 command_runner.go:130] >       "username": "",
	I0731 12:10:22.944007  916191 command_runner.go:130] >       "spec": null,
	I0731 12:10:22.944015  916191 command_runner.go:130] >       "pinned": false
	I0731 12:10:22.944022  916191 command_runner.go:130] >     },
	I0731 12:10:22.944032  916191 command_runner.go:130] >     {
	I0731 12:10:22.944042  916191 command_runner.go:130] >       "id": "bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540",
	I0731 12:10:22.944047  916191 command_runner.go:130] >       "repoTags": [
	I0731 12:10:22.944054  916191 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.3"
	I0731 12:10:22.944076  916191 command_runner.go:130] >       ],
	I0731 12:10:22.944085  916191 command_runner.go:130] >       "repoDigests": [
	I0731 12:10:22.944239  916191 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:4cc5890f8b0fc5fb3f8e07535254f8ad97d90a0335bedcc8773db4ad1e7481bf",
	I0731 12:10:22.944253  916191 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"
	I0731 12:10:22.944262  916191 command_runner.go:130] >       ],
	I0731 12:10:22.944267  916191 command_runner.go:130] >       "size": "57615158",
	I0731 12:10:22.944272  916191 command_runner.go:130] >       "uid": {
	I0731 12:10:22.944277  916191 command_runner.go:130] >         "value": "0"
	I0731 12:10:22.944281  916191 command_runner.go:130] >       },
	I0731 12:10:22.944286  916191 command_runner.go:130] >       "username": "",
	I0731 12:10:22.944295  916191 command_runner.go:130] >       "spec": null,
	I0731 12:10:22.944301  916191 command_runner.go:130] >       "pinned": false
	I0731 12:10:22.944313  916191 command_runner.go:130] >     },
	I0731 12:10:22.944319  916191 command_runner.go:130] >     {
	I0731 12:10:22.944329  916191 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0731 12:10:22.944343  916191 command_runner.go:130] >       "repoTags": [
	I0731 12:10:22.944377  916191 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0731 12:10:22.944385  916191 command_runner.go:130] >       ],
	I0731 12:10:22.944390  916191 command_runner.go:130] >       "repoDigests": [
	I0731 12:10:22.944399  916191 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0731 12:10:22.944412  916191 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0731 12:10:22.944421  916191 command_runner.go:130] >       ],
	I0731 12:10:22.944431  916191 command_runner.go:130] >       "size": "520014",
	I0731 12:10:22.944439  916191 command_runner.go:130] >       "uid": {
	I0731 12:10:22.944445  916191 command_runner.go:130] >         "value": "65535"
	I0731 12:10:22.944456  916191 command_runner.go:130] >       },
	I0731 12:10:22.944461  916191 command_runner.go:130] >       "username": "",
	I0731 12:10:22.944466  916191 command_runner.go:130] >       "spec": null,
	I0731 12:10:22.944471  916191 command_runner.go:130] >       "pinned": false
	I0731 12:10:22.944475  916191 command_runner.go:130] >     }
	I0731 12:10:22.944482  916191 command_runner.go:130] >   ]
	I0731 12:10:22.944486  916191 command_runner.go:130] > }
	I0731 12:10:22.947635  916191 crio.go:496] all images are preloaded for cri-o runtime.
	I0731 12:10:22.947657  916191 crio.go:415] Images already preloaded, skipping extraction
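The "all images are preloaded" conclusion at crio.go:496 comes from comparing the JSON dump above against the expected image set for Kubernetes v1.27.3 on cri-o. A standalone sketch of the same check (not minikube's actual Go logic; the tag list is taken from the dump above):

	sudo crictl images --output json > /tmp/images.json
	for tag in \
	    registry.k8s.io/kube-apiserver:v1.27.3 \
	    registry.k8s.io/kube-controller-manager:v1.27.3 \
	    registry.k8s.io/kube-scheduler:v1.27.3 \
	    registry.k8s.io/kube-proxy:v1.27.3 \
	    registry.k8s.io/etcd:3.5.7-0 \
	    registry.k8s.io/coredns/coredns:v1.10.1 \
	    registry.k8s.io/pause:3.9 \
	    gcr.io/k8s-minikube/storage-provisioner:v5 \
	    docker.io/kindest/kindnetd:v20230511-dc714da8; do
	  grep -q "\"$tag\"" /tmp/images.json || echo "missing: $tag"
	done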
	I0731 12:10:22.947719  916191 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 12:10:22.989604  916191 command_runner.go:130] > {
	I0731 12:10:22.989624  916191 command_runner.go:130] >   "images": [
	I0731 12:10:22.989630  916191 command_runner.go:130] >     {
	I0731 12:10:22.989639  916191 command_runner.go:130] >       "id": "b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79",
	I0731 12:10:22.989645  916191 command_runner.go:130] >       "repoTags": [
	I0731 12:10:22.989652  916191 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0731 12:10:22.989656  916191 command_runner.go:130] >       ],
	I0731 12:10:22.989662  916191 command_runner.go:130] >       "repoDigests": [
	I0731 12:10:22.989672  916191 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f",
	I0731 12:10:22.989684  916191 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"
	I0731 12:10:22.989689  916191 command_runner.go:130] >       ],
	I0731 12:10:22.989699  916191 command_runner.go:130] >       "size": "60881430",
	I0731 12:10:22.989704  916191 command_runner.go:130] >       "uid": null,
	I0731 12:10:22.989712  916191 command_runner.go:130] >       "username": "",
	I0731 12:10:22.989720  916191 command_runner.go:130] >       "spec": null,
	I0731 12:10:22.989725  916191 command_runner.go:130] >       "pinned": false
	I0731 12:10:22.989732  916191 command_runner.go:130] >     },
	I0731 12:10:22.989736  916191 command_runner.go:130] >     {
	I0731 12:10:22.989744  916191 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0731 12:10:22.989753  916191 command_runner.go:130] >       "repoTags": [
	I0731 12:10:22.989762  916191 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0731 12:10:22.989769  916191 command_runner.go:130] >       ],
	I0731 12:10:22.989775  916191 command_runner.go:130] >       "repoDigests": [
	I0731 12:10:22.989785  916191 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0731 12:10:22.989804  916191 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0731 12:10:22.989809  916191 command_runner.go:130] >       ],
	I0731 12:10:22.989818  916191 command_runner.go:130] >       "size": "29037500",
	I0731 12:10:22.989829  916191 command_runner.go:130] >       "uid": null,
	I0731 12:10:22.989834  916191 command_runner.go:130] >       "username": "",
	I0731 12:10:22.989843  916191 command_runner.go:130] >       "spec": null,
	I0731 12:10:22.989849  916191 command_runner.go:130] >       "pinned": false
	I0731 12:10:22.989853  916191 command_runner.go:130] >     },
	I0731 12:10:22.989858  916191 command_runner.go:130] >     {
	I0731 12:10:22.989866  916191 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0731 12:10:22.989876  916191 command_runner.go:130] >       "repoTags": [
	I0731 12:10:22.989883  916191 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0731 12:10:22.989895  916191 command_runner.go:130] >       ],
	I0731 12:10:22.989901  916191 command_runner.go:130] >       "repoDigests": [
	I0731 12:10:22.989910  916191 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0731 12:10:22.989923  916191 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0731 12:10:22.989927  916191 command_runner.go:130] >       ],
	I0731 12:10:22.989935  916191 command_runner.go:130] >       "size": "51393451",
	I0731 12:10:22.989940  916191 command_runner.go:130] >       "uid": null,
	I0731 12:10:22.989945  916191 command_runner.go:130] >       "username": "",
	I0731 12:10:22.989951  916191 command_runner.go:130] >       "spec": null,
	I0731 12:10:22.989956  916191 command_runner.go:130] >       "pinned": false
	I0731 12:10:22.989962  916191 command_runner.go:130] >     },
	I0731 12:10:22.989967  916191 command_runner.go:130] >     {
	I0731 12:10:22.989977  916191 command_runner.go:130] >       "id": "24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737",
	I0731 12:10:22.989984  916191 command_runner.go:130] >       "repoTags": [
	I0731 12:10:22.989991  916191 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0731 12:10:22.989995  916191 command_runner.go:130] >       ],
	I0731 12:10:22.990001  916191 command_runner.go:130] >       "repoDigests": [
	I0731 12:10:22.990012  916191 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd",
	I0731 12:10:22.990022  916191 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"
	I0731 12:10:22.990032  916191 command_runner.go:130] >       ],
	I0731 12:10:22.990038  916191 command_runner.go:130] >       "size": "182283991",
	I0731 12:10:22.990046  916191 command_runner.go:130] >       "uid": {
	I0731 12:10:22.990051  916191 command_runner.go:130] >         "value": "0"
	I0731 12:10:22.990057  916191 command_runner.go:130] >       },
	I0731 12:10:22.990063  916191 command_runner.go:130] >       "username": "",
	I0731 12:10:22.990070  916191 command_runner.go:130] >       "spec": null,
	I0731 12:10:22.990075  916191 command_runner.go:130] >       "pinned": false
	I0731 12:10:22.990079  916191 command_runner.go:130] >     },
	I0731 12:10:22.990084  916191 command_runner.go:130] >     {
	I0731 12:10:22.990094  916191 command_runner.go:130] >       "id": "39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473",
	I0731 12:10:22.990099  916191 command_runner.go:130] >       "repoTags": [
	I0731 12:10:22.990105  916191 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.3"
	I0731 12:10:22.990121  916191 command_runner.go:130] >       ],
	I0731 12:10:22.990129  916191 command_runner.go:130] >       "repoDigests": [
	I0731 12:10:22.990141  916191 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:699defe487a15c642f6f7718de0684e49f4353e6c63f93308d314aab4dedd090",
	I0731 12:10:22.990153  916191 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"
	I0731 12:10:22.990157  916191 command_runner.go:130] >       ],
	I0731 12:10:22.990162  916191 command_runner.go:130] >       "size": "116204496",
	I0731 12:10:22.990169  916191 command_runner.go:130] >       "uid": {
	I0731 12:10:22.990174  916191 command_runner.go:130] >         "value": "0"
	I0731 12:10:22.990178  916191 command_runner.go:130] >       },
	I0731 12:10:22.990187  916191 command_runner.go:130] >       "username": "",
	I0731 12:10:22.990192  916191 command_runner.go:130] >       "spec": null,
	I0731 12:10:22.990197  916191 command_runner.go:130] >       "pinned": false
	I0731 12:10:22.990202  916191 command_runner.go:130] >     },
	I0731 12:10:22.990208  916191 command_runner.go:130] >     {
	I0731 12:10:22.990216  916191 command_runner.go:130] >       "id": "ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8",
	I0731 12:10:22.990224  916191 command_runner.go:130] >       "repoTags": [
	I0731 12:10:22.990230  916191 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.3"
	I0731 12:10:22.990235  916191 command_runner.go:130] >       ],
	I0731 12:10:22.990243  916191 command_runner.go:130] >       "repoDigests": [
	I0731 12:10:22.990252  916191 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:06e413293f95c209052e171448fe17685f625c5edfbc7b63df5d87d07b4711c0",
	I0731 12:10:22.990265  916191 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e"
	I0731 12:10:22.990270  916191 command_runner.go:130] >       ],
	I0731 12:10:22.990275  916191 command_runner.go:130] >       "size": "108667702",
	I0731 12:10:22.990280  916191 command_runner.go:130] >       "uid": {
	I0731 12:10:22.990285  916191 command_runner.go:130] >         "value": "0"
	I0731 12:10:22.990294  916191 command_runner.go:130] >       },
	I0731 12:10:22.990299  916191 command_runner.go:130] >       "username": "",
	I0731 12:10:22.990307  916191 command_runner.go:130] >       "spec": null,
	I0731 12:10:22.990312  916191 command_runner.go:130] >       "pinned": false
	I0731 12:10:22.990316  916191 command_runner.go:130] >     },
	I0731 12:10:22.990321  916191 command_runner.go:130] >     {
	I0731 12:10:22.990331  916191 command_runner.go:130] >       "id": "fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a",
	I0731 12:10:22.990336  916191 command_runner.go:130] >       "repoTags": [
	I0731 12:10:22.990344  916191 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.3"
	I0731 12:10:22.990348  916191 command_runner.go:130] >       ],
	I0731 12:10:22.990353  916191 command_runner.go:130] >       "repoDigests": [
	I0731 12:10:22.990362  916191 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:948423f9b566c1f1bfab123911520168c041193addb9157d7121eaf2bb5afc53",
	I0731 12:10:22.990372  916191 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"
	I0731 12:10:22.990380  916191 command_runner.go:130] >       ],
	I0731 12:10:22.990385  916191 command_runner.go:130] >       "size": "68099991",
	I0731 12:10:22.990389  916191 command_runner.go:130] >       "uid": null,
	I0731 12:10:22.990395  916191 command_runner.go:130] >       "username": "",
	I0731 12:10:22.990402  916191 command_runner.go:130] >       "spec": null,
	I0731 12:10:22.990407  916191 command_runner.go:130] >       "pinned": false
	I0731 12:10:22.990412  916191 command_runner.go:130] >     },
	I0731 12:10:22.990417  916191 command_runner.go:130] >     {
	I0731 12:10:22.990427  916191 command_runner.go:130] >       "id": "bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540",
	I0731 12:10:22.990434  916191 command_runner.go:130] >       "repoTags": [
	I0731 12:10:22.990440  916191 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.3"
	I0731 12:10:22.990445  916191 command_runner.go:130] >       ],
	I0731 12:10:22.990450  916191 command_runner.go:130] >       "repoDigests": [
	I0731 12:10:22.990494  916191 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:4cc5890f8b0fc5fb3f8e07535254f8ad97d90a0335bedcc8773db4ad1e7481bf",
	I0731 12:10:22.990506  916191 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"
	I0731 12:10:22.990510  916191 command_runner.go:130] >       ],
	I0731 12:10:22.990515  916191 command_runner.go:130] >       "size": "57615158",
	I0731 12:10:22.990519  916191 command_runner.go:130] >       "uid": {
	I0731 12:10:22.990524  916191 command_runner.go:130] >         "value": "0"
	I0731 12:10:22.990528  916191 command_runner.go:130] >       },
	I0731 12:10:22.990533  916191 command_runner.go:130] >       "username": "",
	I0731 12:10:22.990538  916191 command_runner.go:130] >       "spec": null,
	I0731 12:10:22.990542  916191 command_runner.go:130] >       "pinned": false
	I0731 12:10:22.990547  916191 command_runner.go:130] >     },
	I0731 12:10:22.990551  916191 command_runner.go:130] >     {
	I0731 12:10:22.990559  916191 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0731 12:10:22.990564  916191 command_runner.go:130] >       "repoTags": [
	I0731 12:10:22.990570  916191 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0731 12:10:22.990587  916191 command_runner.go:130] >       ],
	I0731 12:10:22.990592  916191 command_runner.go:130] >       "repoDigests": [
	I0731 12:10:22.990601  916191 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0731 12:10:22.990612  916191 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0731 12:10:22.990616  916191 command_runner.go:130] >       ],
	I0731 12:10:22.990624  916191 command_runner.go:130] >       "size": "520014",
	I0731 12:10:22.990629  916191 command_runner.go:130] >       "uid": {
	I0731 12:10:22.990634  916191 command_runner.go:130] >         "value": "65535"
	I0731 12:10:22.990639  916191 command_runner.go:130] >       },
	I0731 12:10:22.990644  916191 command_runner.go:130] >       "username": "",
	I0731 12:10:22.990649  916191 command_runner.go:130] >       "spec": null,
	I0731 12:10:22.990655  916191 command_runner.go:130] >       "pinned": false
	I0731 12:10:22.990661  916191 command_runner.go:130] >     }
	I0731 12:10:22.990666  916191 command_runner.go:130] >   ]
	I0731 12:10:22.990672  916191 command_runner.go:130] > }
	I0731 12:10:22.994309  916191 crio.go:496] all images are preloaded for cri-o runtime.
	I0731 12:10:22.994329  916191 cache_images.go:84] Images are preloaded, skipping loading
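	The image inventory printed above is the JSON returned by CRI-O's image service. Below is a minimal Go sketch of decoding such output and checking it against an expected set of tags; the struct shapes are inferred from the dump above, not taken from minikube's source.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Shapes inferred from the JSON dump above; only the fields we need.
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
	Pinned   bool     `json:"pinned"`
}

type listResponse struct {
	Images []image `json:"images"`
}

func main() {
	// Feed the captured image-list JSON on stdin.
	var resp listResponse
	if err := json.NewDecoder(os.Stdin).Decode(&resp); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	have := map[string]bool{}
	for _, img := range resp.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// A subset of the tags the preload check expects (taken from the log).
	for _, want := range []string{
		"registry.k8s.io/kube-proxy:v1.27.3",
		"registry.k8s.io/kube-scheduler:v1.27.3",
		"registry.k8s.io/pause:3.9",
	} {
		if !have[want] {
			fmt.Println("missing:", want)
		}
	}
}

	Run against the JSON captured above it prints nothing, since all three tags are present, which is why the log concludes that images are preloaded and loading is skipped.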
	I0731 12:10:22.994409  916191 ssh_runner.go:195] Run: crio config
	I0731 12:10:23.043516  916191 command_runner.go:130] ! time="2023-07-31 12:10:23.043071671Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0731 12:10:23.043784  916191 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0731 12:10:23.050162  916191 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0731 12:10:23.050198  916191 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0731 12:10:23.050208  916191 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0731 12:10:23.050224  916191 command_runner.go:130] > #
	I0731 12:10:23.050238  916191 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0731 12:10:23.050249  916191 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0731 12:10:23.050257  916191 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0731 12:10:23.050277  916191 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0731 12:10:23.050286  916191 command_runner.go:130] > # reload'.
	I0731 12:10:23.050294  916191 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0731 12:10:23.050304  916191 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0731 12:10:23.050312  916191 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0731 12:10:23.050320  916191 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0731 12:10:23.050326  916191 command_runner.go:130] > [crio]
	I0731 12:10:23.050335  916191 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0731 12:10:23.050365  916191 command_runner.go:130] > # container images, in this directory.
	I0731 12:10:23.050377  916191 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0731 12:10:23.050386  916191 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0731 12:10:23.050403  916191 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0731 12:10:23.050412  916191 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0731 12:10:23.050426  916191 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0731 12:10:23.050443  916191 command_runner.go:130] > # storage_driver = "vfs"
	I0731 12:10:23.050450  916191 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0731 12:10:23.050465  916191 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0731 12:10:23.050471  916191 command_runner.go:130] > # storage_option = [
	I0731 12:10:23.050479  916191 command_runner.go:130] > # ]
	I0731 12:10:23.050488  916191 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0731 12:10:23.050506  916191 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0731 12:10:23.050517  916191 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0731 12:10:23.050524  916191 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0731 12:10:23.050535  916191 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0731 12:10:23.050540  916191 command_runner.go:130] > # always happen on a node reboot
	I0731 12:10:23.050549  916191 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0731 12:10:23.050556  916191 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0731 12:10:23.050566  916191 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0731 12:10:23.050581  916191 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0731 12:10:23.050590  916191 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0731 12:10:23.050600  916191 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0731 12:10:23.050613  916191 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0731 12:10:23.050618  916191 command_runner.go:130] > # internal_wipe = true
	I0731 12:10:23.050630  916191 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0731 12:10:23.050637  916191 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0731 12:10:23.050654  916191 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0731 12:10:23.050663  916191 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0731 12:10:23.050671  916191 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0731 12:10:23.050678  916191 command_runner.go:130] > [crio.api]
	I0731 12:10:23.050685  916191 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0731 12:10:23.050693  916191 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0731 12:10:23.050700  916191 command_runner.go:130] > # IP address on which the stream server will listen.
	I0731 12:10:23.050710  916191 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0731 12:10:23.050724  916191 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0731 12:10:23.050734  916191 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0731 12:10:23.050739  916191 command_runner.go:130] > # stream_port = "0"
	I0731 12:10:23.050748  916191 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0731 12:10:23.050753  916191 command_runner.go:130] > # stream_enable_tls = false
	I0731 12:10:23.050762  916191 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0731 12:10:23.050767  916191 command_runner.go:130] > # stream_idle_timeout = ""
	I0731 12:10:23.050777  916191 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0731 12:10:23.050785  916191 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0731 12:10:23.050798  916191 command_runner.go:130] > # minutes.
	I0731 12:10:23.050807  916191 command_runner.go:130] > # stream_tls_cert = ""
	I0731 12:10:23.050815  916191 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0731 12:10:23.050824  916191 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0731 12:10:23.050830  916191 command_runner.go:130] > # stream_tls_key = ""
	I0731 12:10:23.050841  916191 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0731 12:10:23.050849  916191 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0731 12:10:23.050858  916191 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0731 12:10:23.050870  916191 command_runner.go:130] > # stream_tls_ca = ""
	I0731 12:10:23.050880  916191 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0731 12:10:23.050889  916191 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0731 12:10:23.050899  916191 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0731 12:10:23.050907  916191 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0731 12:10:23.050924  916191 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0731 12:10:23.050934  916191 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0731 12:10:23.050947  916191 command_runner.go:130] > [crio.runtime]
	I0731 12:10:23.050959  916191 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0731 12:10:23.050966  916191 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0731 12:10:23.050974  916191 command_runner.go:130] > # "nofile=1024:2048"
	I0731 12:10:23.050983  916191 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0731 12:10:23.050991  916191 command_runner.go:130] > # default_ulimits = [
	I0731 12:10:23.050996  916191 command_runner.go:130] > # ]
	I0731 12:10:23.051004  916191 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0731 12:10:23.051012  916191 command_runner.go:130] > # no_pivot = false
	I0731 12:10:23.051025  916191 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0731 12:10:23.051037  916191 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0731 12:10:23.051043  916191 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0731 12:10:23.051053  916191 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0731 12:10:23.051060  916191 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0731 12:10:23.051071  916191 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 12:10:23.051100  916191 command_runner.go:130] > # conmon = ""
	I0731 12:10:23.051110  916191 command_runner.go:130] > # Cgroup setting for conmon
	I0731 12:10:23.051118  916191 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0731 12:10:23.051124  916191 command_runner.go:130] > conmon_cgroup = "pod"
	I0731 12:10:23.051134  916191 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0731 12:10:23.051144  916191 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0731 12:10:23.051152  916191 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 12:10:23.051159  916191 command_runner.go:130] > # conmon_env = [
	I0731 12:10:23.051163  916191 command_runner.go:130] > # ]
	I0731 12:10:23.051176  916191 command_runner.go:130] > # Additional environment variables to set for all the
	I0731 12:10:23.051191  916191 command_runner.go:130] > # containers. These are overridden if set in the
	I0731 12:10:23.051199  916191 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0731 12:10:23.051207  916191 command_runner.go:130] > # default_env = [
	I0731 12:10:23.051211  916191 command_runner.go:130] > # ]
	I0731 12:10:23.051221  916191 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0731 12:10:23.051229  916191 command_runner.go:130] > # selinux = false
	I0731 12:10:23.051237  916191 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0731 12:10:23.051253  916191 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0731 12:10:23.051263  916191 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0731 12:10:23.051269  916191 command_runner.go:130] > # seccomp_profile = ""
	I0731 12:10:23.051278  916191 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0731 12:10:23.051285  916191 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0731 12:10:23.051296  916191 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0731 12:10:23.051303  916191 command_runner.go:130] > # which might increase security.
	I0731 12:10:23.051309  916191 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0731 12:10:23.051325  916191 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0731 12:10:23.051337  916191 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0731 12:10:23.051345  916191 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0731 12:10:23.051358  916191 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0731 12:10:23.051370  916191 command_runner.go:130] > # This option supports live configuration reload.
	I0731 12:10:23.051377  916191 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0731 12:10:23.051384  916191 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0731 12:10:23.051399  916191 command_runner.go:130] > # the cgroup blockio controller.
	I0731 12:10:23.051407  916191 command_runner.go:130] > # blockio_config_file = ""
	I0731 12:10:23.051415  916191 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0731 12:10:23.051423  916191 command_runner.go:130] > # irqbalance daemon.
	I0731 12:10:23.051430  916191 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0731 12:10:23.051441  916191 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0731 12:10:23.051447  916191 command_runner.go:130] > # This option supports live configuration reload.
	I0731 12:10:23.051455  916191 command_runner.go:130] > # rdt_config_file = ""
	I0731 12:10:23.051465  916191 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0731 12:10:23.051476  916191 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0731 12:10:23.051488  916191 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0731 12:10:23.051493  916191 command_runner.go:130] > # separate_pull_cgroup = ""
	I0731 12:10:23.051504  916191 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0731 12:10:23.051511  916191 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0731 12:10:23.051520  916191 command_runner.go:130] > # will be added.
	I0731 12:10:23.051525  916191 command_runner.go:130] > # default_capabilities = [
	I0731 12:10:23.051533  916191 command_runner.go:130] > # 	"CHOWN",
	I0731 12:10:23.051540  916191 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0731 12:10:23.051555  916191 command_runner.go:130] > # 	"FSETID",
	I0731 12:10:23.051561  916191 command_runner.go:130] > # 	"FOWNER",
	I0731 12:10:23.051565  916191 command_runner.go:130] > # 	"SETGID",
	I0731 12:10:23.051572  916191 command_runner.go:130] > # 	"SETUID",
	I0731 12:10:23.051580  916191 command_runner.go:130] > # 	"SETPCAP",
	I0731 12:10:23.051585  916191 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0731 12:10:23.051589  916191 command_runner.go:130] > # 	"KILL",
	I0731 12:10:23.051597  916191 command_runner.go:130] > # ]
	I0731 12:10:23.051606  916191 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0731 12:10:23.051622  916191 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0731 12:10:23.051632  916191 command_runner.go:130] > # add_inheritable_capabilities = true
	I0731 12:10:23.051640  916191 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0731 12:10:23.051650  916191 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 12:10:23.051657  916191 command_runner.go:130] > # default_sysctls = [
	I0731 12:10:23.051664  916191 command_runner.go:130] > # ]
	I0731 12:10:23.051671  916191 command_runner.go:130] > # List of devices on the host that a
	I0731 12:10:23.051681  916191 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0731 12:10:23.051686  916191 command_runner.go:130] > # allowed_devices = [
	I0731 12:10:23.051699  916191 command_runner.go:130] > # 	"/dev/fuse",
	I0731 12:10:23.051706  916191 command_runner.go:130] > # ]
	I0731 12:10:23.051714  916191 command_runner.go:130] > # List of additional devices, specified as
	I0731 12:10:23.051737  916191 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0731 12:10:23.051747  916191 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0731 12:10:23.051755  916191 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 12:10:23.051770  916191 command_runner.go:130] > # additional_devices = [
	I0731 12:10:23.051777  916191 command_runner.go:130] > # ]
	I0731 12:10:23.051783  916191 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0731 12:10:23.051790  916191 command_runner.go:130] > # cdi_spec_dirs = [
	I0731 12:10:23.051795  916191 command_runner.go:130] > # 	"/etc/cdi",
	I0731 12:10:23.051802  916191 command_runner.go:130] > # 	"/var/run/cdi",
	I0731 12:10:23.051806  916191 command_runner.go:130] > # ]
	I0731 12:10:23.051814  916191 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0731 12:10:23.051823  916191 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0731 12:10:23.051832  916191 command_runner.go:130] > # Defaults to false.
	I0731 12:10:23.051846  916191 command_runner.go:130] > # device_ownership_from_security_context = false
	I0731 12:10:23.051857  916191 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0731 12:10:23.051865  916191 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0731 12:10:23.051873  916191 command_runner.go:130] > # hooks_dir = [
	I0731 12:10:23.051879  916191 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0731 12:10:23.051886  916191 command_runner.go:130] > # ]
	I0731 12:10:23.051894  916191 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0731 12:10:23.051904  916191 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0731 12:10:23.051911  916191 command_runner.go:130] > # its default mounts from the following two files:
	I0731 12:10:23.051921  916191 command_runner.go:130] > #
	I0731 12:10:23.051933  916191 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0731 12:10:23.051942  916191 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0731 12:10:23.051953  916191 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0731 12:10:23.051957  916191 command_runner.go:130] > #
	I0731 12:10:23.051967  916191 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0731 12:10:23.051975  916191 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0731 12:10:23.051987  916191 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0731 12:10:23.052022  916191 command_runner.go:130] > #      only add mounts it finds in this file.
	I0731 12:10:23.052027  916191 command_runner.go:130] > #
	I0731 12:10:23.052034  916191 command_runner.go:130] > # default_mounts_file = ""
	I0731 12:10:23.052044  916191 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0731 12:10:23.052052  916191 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0731 12:10:23.052061  916191 command_runner.go:130] > # pids_limit = 0
	I0731 12:10:23.052069  916191 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0731 12:10:23.052085  916191 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0731 12:10:23.052097  916191 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0731 12:10:23.052128  916191 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0731 12:10:23.052137  916191 command_runner.go:130] > # log_size_max = -1
	I0731 12:10:23.052146  916191 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0731 12:10:23.052153  916191 command_runner.go:130] > # log_to_journald = false
	I0731 12:10:23.052164  916191 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0731 12:10:23.052173  916191 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0731 12:10:23.052180  916191 command_runner.go:130] > # Path to directory for container attach sockets.
	I0731 12:10:23.052189  916191 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0731 12:10:23.052201  916191 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0731 12:10:23.052213  916191 command_runner.go:130] > # bind_mount_prefix = ""
	I0731 12:10:23.052220  916191 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0731 12:10:23.052227  916191 command_runner.go:130] > # read_only = false
	I0731 12:10:23.052235  916191 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0731 12:10:23.052245  916191 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0731 12:10:23.052250  916191 command_runner.go:130] > # live configuration reload.
	I0731 12:10:23.052257  916191 command_runner.go:130] > # log_level = "info"
	I0731 12:10:23.052264  916191 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0731 12:10:23.052278  916191 command_runner.go:130] > # This option supports live configuration reload.
	I0731 12:10:23.052286  916191 command_runner.go:130] > # log_filter = ""
	I0731 12:10:23.052294  916191 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0731 12:10:23.052305  916191 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0731 12:10:23.052310  916191 command_runner.go:130] > # separated by comma.
	I0731 12:10:23.052315  916191 command_runner.go:130] > # uid_mappings = ""
	I0731 12:10:23.052325  916191 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0731 12:10:23.052335  916191 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0731 12:10:23.052349  916191 command_runner.go:130] > # separated by comma.
	I0731 12:10:23.052358  916191 command_runner.go:130] > # gid_mappings = ""
	I0731 12:10:23.052366  916191 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0731 12:10:23.052377  916191 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 12:10:23.052384  916191 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 12:10:23.052393  916191 command_runner.go:130] > # minimum_mappable_uid = -1
	I0731 12:10:23.052400  916191 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0731 12:10:23.052408  916191 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 12:10:23.052424  916191 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 12:10:23.052432  916191 command_runner.go:130] > # minimum_mappable_gid = -1
	I0731 12:10:23.052440  916191 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0731 12:10:23.052449  916191 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0731 12:10:23.052459  916191 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0731 12:10:23.052467  916191 command_runner.go:130] > # ctr_stop_timeout = 30
	I0731 12:10:23.052475  916191 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0731 12:10:23.052489  916191 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0731 12:10:23.052502  916191 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0731 12:10:23.052514  916191 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0731 12:10:23.052519  916191 command_runner.go:130] > # drop_infra_ctr = true
	I0731 12:10:23.052530  916191 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0731 12:10:23.052539  916191 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0731 12:10:23.052551  916191 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0731 12:10:23.052557  916191 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0731 12:10:23.052573  916191 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0731 12:10:23.052579  916191 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0731 12:10:23.052588  916191 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0731 12:10:23.052596  916191 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0731 12:10:23.052604  916191 command_runner.go:130] > # pinns_path = ""
	I0731 12:10:23.052613  916191 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0731 12:10:23.052624  916191 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0731 12:10:23.052632  916191 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0731 12:10:23.052646  916191 command_runner.go:130] > # default_runtime = "runc"
	I0731 12:10:23.052656  916191 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0731 12:10:23.052665  916191 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0731 12:10:23.052680  916191 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0731 12:10:23.052687  916191 command_runner.go:130] > # creation as a file is not desired either.
	I0731 12:10:23.052699  916191 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0731 12:10:23.052706  916191 command_runner.go:130] > # the hostname is being managed dynamically.
	I0731 12:10:23.052719  916191 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0731 12:10:23.052727  916191 command_runner.go:130] > # ]
	I0731 12:10:23.052735  916191 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0731 12:10:23.052750  916191 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0731 12:10:23.052758  916191 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0731 12:10:23.052769  916191 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0731 12:10:23.052773  916191 command_runner.go:130] > #
	I0731 12:10:23.052779  916191 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0731 12:10:23.052787  916191 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0731 12:10:23.052798  916191 command_runner.go:130] > #  runtime_type = "oci"
	I0731 12:10:23.052810  916191 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0731 12:10:23.052816  916191 command_runner.go:130] > #  privileged_without_host_devices = false
	I0731 12:10:23.052825  916191 command_runner.go:130] > #  allowed_annotations = []
	I0731 12:10:23.052829  916191 command_runner.go:130] > # Where:
	I0731 12:10:23.052836  916191 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0731 12:10:23.052847  916191 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0731 12:10:23.052855  916191 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0731 12:10:23.052874  916191 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0731 12:10:23.052881  916191 command_runner.go:130] > #   in $PATH.
	I0731 12:10:23.052891  916191 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0731 12:10:23.052897  916191 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0731 12:10:23.052908  916191 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0731 12:10:23.052912  916191 command_runner.go:130] > #   state.
	I0731 12:10:23.052922  916191 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0731 12:10:23.052930  916191 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0731 12:10:23.052946  916191 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0731 12:10:23.052956  916191 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0731 12:10:23.052964  916191 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0731 12:10:23.052972  916191 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0731 12:10:23.052981  916191 command_runner.go:130] > #   The currently recognized values are:
	I0731 12:10:23.052989  916191 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0731 12:10:23.053001  916191 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0731 12:10:23.053008  916191 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0731 12:10:23.053024  916191 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0731 12:10:23.053036  916191 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0731 12:10:23.053043  916191 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0731 12:10:23.053051  916191 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0731 12:10:23.053063  916191 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0731 12:10:23.053069  916191 command_runner.go:130] > #   should be moved to the container's cgroup
	I0731 12:10:23.053077  916191 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0731 12:10:23.053083  916191 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0731 12:10:23.053095  916191 command_runner.go:130] > runtime_type = "oci"
	I0731 12:10:23.053101  916191 command_runner.go:130] > runtime_root = "/run/runc"
	I0731 12:10:23.053108  916191 command_runner.go:130] > runtime_config_path = ""
	I0731 12:10:23.053113  916191 command_runner.go:130] > monitor_path = ""
	I0731 12:10:23.053123  916191 command_runner.go:130] > monitor_cgroup = ""
	I0731 12:10:23.053128  916191 command_runner.go:130] > monitor_exec_cgroup = ""
	I0731 12:10:23.053150  916191 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0731 12:10:23.053167  916191 command_runner.go:130] > # running containers
	I0731 12:10:23.053176  916191 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0731 12:10:23.053184  916191 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0731 12:10:23.053195  916191 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0731 12:10:23.053203  916191 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0731 12:10:23.053211  916191 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0731 12:10:23.053218  916191 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0731 12:10:23.053226  916191 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0731 12:10:23.053232  916191 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0731 12:10:23.053246  916191 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0731 12:10:23.053256  916191 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0731 12:10:23.053264  916191 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0731 12:10:23.053274  916191 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0731 12:10:23.053282  916191 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0731 12:10:23.053295  916191 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0731 12:10:23.053304  916191 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0731 12:10:23.053321  916191 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0731 12:10:23.053337  916191 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0731 12:10:23.053351  916191 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0731 12:10:23.053361  916191 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0731 12:10:23.053373  916191 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0731 12:10:23.053378  916191 command_runner.go:130] > # Example:
	I0731 12:10:23.053384  916191 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0731 12:10:23.053400  916191 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0731 12:10:23.053407  916191 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0731 12:10:23.053416  916191 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0731 12:10:23.053421  916191 command_runner.go:130] > # cpuset = 0
	I0731 12:10:23.053429  916191 command_runner.go:130] > # cpushares = "0-1"
	I0731 12:10:23.053433  916191 command_runner.go:130] > # Where:
	I0731 12:10:23.053439  916191 command_runner.go:130] > # The workload name is workload-type.
	I0731 12:10:23.053451  916191 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0731 12:10:23.053458  916191 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0731 12:10:23.053472  916191 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0731 12:10:23.053486  916191 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0731 12:10:23.053496  916191 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0731 12:10:23.053503  916191 command_runner.go:130] > # 
	I0731 12:10:23.053512  916191 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0731 12:10:23.053518  916191 command_runner.go:130] > #
	I0731 12:10:23.053525  916191 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0731 12:10:23.053543  916191 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0731 12:10:23.053552  916191 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0731 12:10:23.053559  916191 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0731 12:10:23.053571  916191 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0731 12:10:23.053576  916191 command_runner.go:130] > [crio.image]
	I0731 12:10:23.053587  916191 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0731 12:10:23.053593  916191 command_runner.go:130] > # default_transport = "docker://"
	I0731 12:10:23.053603  916191 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0731 12:10:23.053617  916191 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0731 12:10:23.053625  916191 command_runner.go:130] > # global_auth_file = ""
	I0731 12:10:23.053631  916191 command_runner.go:130] > # The image used to instantiate infra containers.
	I0731 12:10:23.053637  916191 command_runner.go:130] > # This option supports live configuration reload.
	I0731 12:10:23.053643  916191 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0731 12:10:23.053654  916191 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0731 12:10:23.053662  916191 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0731 12:10:23.053671  916191 command_runner.go:130] > # This option supports live configuration reload.
	I0731 12:10:23.053677  916191 command_runner.go:130] > # pause_image_auth_file = ""
	I0731 12:10:23.053692  916191 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0731 12:10:23.053703  916191 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0731 12:10:23.053711  916191 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0731 12:10:23.053718  916191 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0731 12:10:23.053729  916191 command_runner.go:130] > # pause_command = "/pause"
	I0731 12:10:23.053736  916191 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0731 12:10:23.053746  916191 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0731 12:10:23.053754  916191 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0731 12:10:23.053765  916191 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0731 12:10:23.053772  916191 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0731 12:10:23.053786  916191 command_runner.go:130] > # signature_policy = ""
	I0731 12:10:23.053796  916191 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0731 12:10:23.053805  916191 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0731 12:10:23.053813  916191 command_runner.go:130] > # changing them here.
	I0731 12:10:23.053829  916191 command_runner.go:130] > # insecure_registries = [
	I0731 12:10:23.053837  916191 command_runner.go:130] > # ]
	I0731 12:10:23.053845  916191 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0731 12:10:23.053854  916191 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0731 12:10:23.053859  916191 command_runner.go:130] > # image_volumes = "mkdir"
	I0731 12:10:23.053868  916191 command_runner.go:130] > # Temporary directory to use for storing big files
	I0731 12:10:23.053873  916191 command_runner.go:130] > # big_files_temporary_dir = ""
	I0731 12:10:23.053881  916191 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0731 12:10:23.053887  916191 command_runner.go:130] > # CNI plugins.
	I0731 12:10:23.053895  916191 command_runner.go:130] > [crio.network]
	I0731 12:10:23.053902  916191 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0731 12:10:23.053916  916191 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0731 12:10:23.053924  916191 command_runner.go:130] > # cni_default_network = ""
	I0731 12:10:23.053931  916191 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0731 12:10:23.053939  916191 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0731 12:10:23.053946  916191 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0731 12:10:23.053953  916191 command_runner.go:130] > # plugin_dirs = [
	I0731 12:10:23.053959  916191 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0731 12:10:23.053963  916191 command_runner.go:130] > # ]
	I0731 12:10:23.053970  916191 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0731 12:10:23.053978  916191 command_runner.go:130] > [crio.metrics]
	I0731 12:10:23.053984  916191 command_runner.go:130] > # Globally enable or disable metrics support.
	I0731 12:10:23.053991  916191 command_runner.go:130] > # enable_metrics = false
	I0731 12:10:23.053997  916191 command_runner.go:130] > # Specify enabled metrics collectors.
	I0731 12:10:23.054005  916191 command_runner.go:130] > # Per default all metrics are enabled.
	I0731 12:10:23.054013  916191 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0731 12:10:23.054023  916191 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0731 12:10:23.054031  916191 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0731 12:10:23.054040  916191 command_runner.go:130] > # metrics_collectors = [
	I0731 12:10:23.054045  916191 command_runner.go:130] > # 	"operations",
	I0731 12:10:23.054051  916191 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0731 12:10:23.054057  916191 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0731 12:10:23.054064  916191 command_runner.go:130] > # 	"operations_errors",
	I0731 12:10:23.054070  916191 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0731 12:10:23.054078  916191 command_runner.go:130] > # 	"image_pulls_by_name",
	I0731 12:10:23.054085  916191 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0731 12:10:23.054092  916191 command_runner.go:130] > # 	"image_pulls_failures",
	I0731 12:10:23.054097  916191 command_runner.go:130] > # 	"image_pulls_successes",
	I0731 12:10:23.054103  916191 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0731 12:10:23.054110  916191 command_runner.go:130] > # 	"image_layer_reuse",
	I0731 12:10:23.054115  916191 command_runner.go:130] > # 	"containers_oom_total",
	I0731 12:10:23.054121  916191 command_runner.go:130] > # 	"containers_oom",
	I0731 12:10:23.054134  916191 command_runner.go:130] > # 	"processes_defunct",
	I0731 12:10:23.054140  916191 command_runner.go:130] > # 	"operations_total",
	I0731 12:10:23.054146  916191 command_runner.go:130] > # 	"operations_latency_seconds",
	I0731 12:10:23.054157  916191 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0731 12:10:23.054163  916191 command_runner.go:130] > # 	"operations_errors_total",
	I0731 12:10:23.054171  916191 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0731 12:10:23.054176  916191 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0731 12:10:23.054185  916191 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0731 12:10:23.054191  916191 command_runner.go:130] > # 	"image_pulls_success_total",
	I0731 12:10:23.054196  916191 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0731 12:10:23.054204  916191 command_runner.go:130] > # 	"containers_oom_count_total",
	I0731 12:10:23.054208  916191 command_runner.go:130] > # ]
	I0731 12:10:23.054214  916191 command_runner.go:130] > # The port on which the metrics server will listen.
	I0731 12:10:23.054219  916191 command_runner.go:130] > # metrics_port = 9090
	I0731 12:10:23.054226  916191 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0731 12:10:23.054234  916191 command_runner.go:130] > # metrics_socket = ""
	I0731 12:10:23.054240  916191 command_runner.go:130] > # The certificate for the secure metrics server.
	I0731 12:10:23.054249  916191 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0731 12:10:23.054260  916191 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0731 12:10:23.054266  916191 command_runner.go:130] > # certificate on any modification event.
	I0731 12:10:23.054273  916191 command_runner.go:130] > # metrics_cert = ""
	I0731 12:10:23.054280  916191 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0731 12:10:23.054289  916191 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0731 12:10:23.054294  916191 command_runner.go:130] > # metrics_key = ""
	I0731 12:10:23.054301  916191 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0731 12:10:23.054306  916191 command_runner.go:130] > [crio.tracing]
	I0731 12:10:23.054315  916191 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0731 12:10:23.054321  916191 command_runner.go:130] > # enable_tracing = false
	I0731 12:10:23.054330  916191 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0731 12:10:23.054336  916191 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0731 12:10:23.054344  916191 command_runner.go:130] > # Number of samples to collect per million spans.
	I0731 12:10:23.054350  916191 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0731 12:10:23.054361  916191 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0731 12:10:23.054366  916191 command_runner.go:130] > [crio.stats]
	I0731 12:10:23.054375  916191 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0731 12:10:23.054382  916191 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0731 12:10:23.054388  916191 command_runner.go:130] > # stats_collection_period = 0
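	Everything in the dump above is TOML, and almost every key is a commented-out default; the few uncommented assignments (conmon_cgroup, cgroup_manager, the [crio.runtime.runtimes.runc] table, pause_image) are the values minikube actually sets. A rough, stdlib-only Go sketch of pulling one effective key out of such a dump follows (a simplification: real code would use a proper TOML parser).

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// lookup scans a crio-config-style TOML dump on stdin for an uncommented
// `key = "value"` assignment and returns the unquoted value.
func lookup(key string) (string, bool) {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "#") {
			continue // commented-out default, not an effective setting
		}
		if k, v, ok := strings.Cut(line, "="); ok && strings.TrimSpace(k) == key {
			return strings.Trim(strings.TrimSpace(v), `"`), true
		}
	}
	return "", false
}

func main() {
	if v, ok := lookup("cgroup_manager"); ok {
		fmt.Println("cgroup_manager =", v) // "cgroupfs" in the dump above
	}
}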
	I0731 12:10:23.054459  916191 cni.go:84] Creating CNI manager for ""
	I0731 12:10:23.054475  916191 cni.go:136] 1 nodes found, recommending kindnet
	I0731 12:10:23.054485  916191 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0731 12:10:23.054517  916191 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-951087 NodeName:multinode-951087 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 12:10:23.054673  916191 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-951087"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
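The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered into a single file that this log later copies to /var/tmp/minikube/kubeadm.yaml. A minimal sketch, assuming shell access to the node, of inspecting and validating the rendered file before init (paths and binary location taken from this log; `kubeadm config validate` is available in kubeadm v1.26+):

	# Inspect the rendered config at the staging path used later in this log
	sudo cat /var/tmp/minikube/kubeadm.yaml.new
	# Validate the file without touching the cluster (kubeadm v1.26+)
	sudo /var/lib/minikube/binaries/v1.27.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new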
	I0731 12:10:23.054761  916191 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-951087 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-951087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
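The kubelet unit text above uses the standard systemd drop-in override pattern: the scp line below writes it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, and the empty ExecStart= clears the command inherited from the base kubelet.service before the full command line on the next line replaces it. A minimal sketch, assuming shell access to the node, of confirming the override took effect:

	sudo systemctl daemon-reload   # reread unit files and drop-ins
	systemctl cat kubelet          # print the base unit with all drop-ins merged in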
	I0731 12:10:23.054836  916191 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0731 12:10:23.065221  916191 command_runner.go:130] > kubeadm
	I0731 12:10:23.065254  916191 command_runner.go:130] > kubectl
	I0731 12:10:23.065260  916191 command_runner.go:130] > kubelet
	I0731 12:10:23.066633  916191 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 12:10:23.066716  916191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 12:10:23.077867  916191 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0731 12:10:23.100658  916191 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 12:10:23.123323  916191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0731 12:10:23.145403  916191 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0731 12:10:23.150262  916191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 12:10:23.163912  916191 certs.go:56] Setting up /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087 for IP: 192.168.58.2
	I0731 12:10:23.163943  916191 certs.go:190] acquiring lock for shared ca certs: {Name:mk762e840a818dea6b5e9edfaa8822eb28411d63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:10:23.164104  916191 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16968-847174/.minikube/ca.key
	I0731 12:10:23.164244  916191 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16968-847174/.minikube/proxy-client-ca.key
	I0731 12:10:23.164296  916191 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/client.key
	I0731 12:10:23.164311  916191 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/client.crt with IP's: []
	I0731 12:10:23.454345  916191 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/client.crt ...
	I0731 12:10:23.454378  916191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/client.crt: {Name:mkc06edab991664c59ae8b4596a442ae999eef82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:10:23.454587  916191 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/client.key ...
	I0731 12:10:23.454600  916191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/client.key: {Name:mkfce0c964b66e47a448caa78fcac8ba430abec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:10:23.454694  916191 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/apiserver.key.cee25041
	I0731 12:10:23.454711  916191 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0731 12:10:23.879997  916191 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/apiserver.crt.cee25041 ...
	I0731 12:10:23.880029  916191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/apiserver.crt.cee25041: {Name:mkc58ae280eb1fa54c845bf7fb85cdbb2b6d8c5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:10:23.880231  916191 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/apiserver.key.cee25041 ...
	I0731 12:10:23.880244  916191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/apiserver.key.cee25041: {Name:mk1bd19a131c09362149787f9cc2271ecfeb3a4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:10:23.880339  916191 certs.go:337] copying /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/apiserver.crt
	I0731 12:10:23.880420  916191 certs.go:341] copying /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/apiserver.key
	I0731 12:10:23.880480  916191 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/proxy-client.key
	I0731 12:10:23.880499  916191 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/proxy-client.crt with IP's: []
	I0731 12:10:24.206992  916191 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/proxy-client.crt ...
	I0731 12:10:24.207024  916191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/proxy-client.crt: {Name:mkae58af069895cfa6e9f55d8a610edf878c3d4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:10:24.207220  916191 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/proxy-client.key ...
	I0731 12:10:24.207234  916191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/proxy-client.key: {Name:mkec606229b8885926e8314ed20f9391f29077f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:10:24.207321  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 12:10:24.207343  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 12:10:24.207358  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 12:10:24.207374  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 12:10:24.207387  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 12:10:24.207404  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 12:10:24.207419  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 12:10:24.207430  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 12:10:24.207484  916191 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/852550.pem (1338 bytes)
	W0731 12:10:24.207525  916191 certs.go:433] ignoring /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/852550_empty.pem, impossibly tiny 0 bytes
	I0731 12:10:24.207540  916191 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 12:10:24.207567  916191 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem (1082 bytes)
	I0731 12:10:24.207595  916191 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem (1123 bytes)
	I0731 12:10:24.207627  916191 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem (1679 bytes)
	I0731 12:10:24.207679  916191 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem (1708 bytes)
	I0731 12:10:24.207710  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:10:24.207725  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/852550.pem -> /usr/share/ca-certificates/852550.pem
	I0731 12:10:24.207739  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem -> /usr/share/ca-certificates/8525502.pem
	I0731 12:10:24.208588  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0731 12:10:24.241386  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 12:10:24.271735  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 12:10:24.301873  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 12:10:24.331242  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 12:10:24.360492  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 12:10:24.391198  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 12:10:24.422455  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 12:10:24.453312  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 12:10:24.482698  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/certs/852550.pem --> /usr/share/ca-certificates/852550.pem (1338 bytes)
	I0731 12:10:24.512378  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem --> /usr/share/ca-certificates/8525502.pem (1708 bytes)
	I0731 12:10:24.541912  916191 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 12:10:24.563529  916191 ssh_runner.go:195] Run: openssl version
	I0731 12:10:24.570436  916191 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0731 12:10:24.570811  916191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8525502.pem && ln -fs /usr/share/ca-certificates/8525502.pem /etc/ssl/certs/8525502.pem"
	I0731 12:10:24.582452  916191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8525502.pem
	I0731 12:10:24.587402  916191 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 31 11:54 /usr/share/ca-certificates/8525502.pem
	I0731 12:10:24.587699  916191 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 31 11:54 /usr/share/ca-certificates/8525502.pem
	I0731 12:10:24.587772  916191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8525502.pem
	I0731 12:10:24.596314  916191 command_runner.go:130] > 3ec20f2e
	I0731 12:10:24.596398  916191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8525502.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 12:10:24.608091  916191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 12:10:24.619892  916191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:10:24.624719  916191 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 31 11:48 /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:10:24.624744  916191 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 31 11:48 /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:10:24.624824  916191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:10:24.633313  916191 command_runner.go:130] > b5213941
	I0731 12:10:24.633716  916191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 12:10:24.645337  916191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/852550.pem && ln -fs /usr/share/ca-certificates/852550.pem /etc/ssl/certs/852550.pem"
	I0731 12:10:24.657159  916191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/852550.pem
	I0731 12:10:24.661644  916191 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 31 11:54 /usr/share/ca-certificates/852550.pem
	I0731 12:10:24.661951  916191 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 31 11:54 /usr/share/ca-certificates/852550.pem
	I0731 12:10:24.662031  916191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/852550.pem
	I0731 12:10:24.670321  916191 command_runner.go:130] > 51391683
	I0731 12:10:24.670728  916191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/852550.pem /etc/ssl/certs/51391683.0"
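The three test/ln pairs above implement OpenSSL's subject-hash lookup convention: OpenSSL locates a CA certificate by hashing its subject name and looking for /etc/ssl/certs/&lt;hash&gt;.0. A sketch of the same derivation, reusing a cert and hash already shown in this log:

	# -hash prints the subject-name hash OpenSSL uses for lookup (51391683 above)
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/852550.pem)
	sudo ln -fs /usr/share/ca-certificates/852550.pem "/etc/ssl/certs/${h}.0"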
	I0731 12:10:24.682683  916191 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0731 12:10:24.687242  916191 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0731 12:10:24.687329  916191 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0731 12:10:24.687386  916191 kubeadm.go:404] StartCluster: {Name:multinode-951087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-951087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 12:10:24.687490  916191 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 12:10:24.687552  916191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 12:10:24.733597  916191 cri.go:89] found id: ""
	I0731 12:10:24.733664  916191 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 12:10:24.744356  916191 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0731 12:10:24.744387  916191 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0731 12:10:24.744397  916191 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0731 12:10:24.744508  916191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 12:10:24.755311  916191 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0731 12:10:24.755405  916191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 12:10:24.766535  916191 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0731 12:10:24.766557  916191 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0731 12:10:24.766566  916191 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0731 12:10:24.766577  916191 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 12:10:24.766604  916191 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 12:10:24.766643  916191 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0731 12:10:24.822290  916191 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0731 12:10:24.822325  916191 command_runner.go:130] > [init] Using Kubernetes version: v1.27.3
	I0731 12:10:24.822419  916191 kubeadm.go:322] [preflight] Running pre-flight checks
	I0731 12:10:24.822437  916191 command_runner.go:130] > [preflight] Running pre-flight checks
	I0731 12:10:24.870243  916191 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0731 12:10:24.870277  916191 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0731 12:10:24.870340  916191 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1040-aws
	I0731 12:10:24.870350  916191 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1040-aws
	I0731 12:10:24.870382  916191 kubeadm.go:322] OS: Linux
	I0731 12:10:24.870391  916191 command_runner.go:130] > OS: Linux
	I0731 12:10:24.870444  916191 kubeadm.go:322] CGROUPS_CPU: enabled
	I0731 12:10:24.870463  916191 command_runner.go:130] > CGROUPS_CPU: enabled
	I0731 12:10:24.870514  916191 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0731 12:10:24.870524  916191 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0731 12:10:24.870577  916191 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0731 12:10:24.870586  916191 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0731 12:10:24.870640  916191 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0731 12:10:24.870649  916191 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0731 12:10:24.870702  916191 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0731 12:10:24.870711  916191 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0731 12:10:24.870767  916191 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0731 12:10:24.870776  916191 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0731 12:10:24.870826  916191 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0731 12:10:24.870837  916191 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0731 12:10:24.870892  916191 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0731 12:10:24.870905  916191 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0731 12:10:24.870958  916191 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0731 12:10:24.870968  916191 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0731 12:10:24.957117  916191 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 12:10:24.957144  916191 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 12:10:24.957235  916191 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 12:10:24.957248  916191 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 12:10:24.957341  916191 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0731 12:10:24.957350  916191 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0731 12:10:25.244464  916191 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 12:10:25.248284  916191 out.go:204]   - Generating certificates and keys ...
	I0731 12:10:25.244807  916191 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 12:10:25.248455  916191 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0731 12:10:25.248473  916191 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0731 12:10:25.248540  916191 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0731 12:10:25.248560  916191 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0731 12:10:26.163231  916191 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 12:10:26.163320  916191 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 12:10:26.478805  916191 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0731 12:10:26.478880  916191 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0731 12:10:26.863017  916191 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0731 12:10:26.863041  916191 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0731 12:10:27.260918  916191 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0731 12:10:27.260945  916191 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0731 12:10:27.690730  916191 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0731 12:10:27.690756  916191 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0731 12:10:27.691070  916191 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-951087] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0731 12:10:27.691092  916191 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-951087] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0731 12:10:27.880689  916191 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0731 12:10:27.880713  916191 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0731 12:10:27.881220  916191 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-951087] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0731 12:10:27.881243  916191 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-951087] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0731 12:10:28.073991  916191 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 12:10:28.074015  916191 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 12:10:28.849733  916191 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 12:10:28.849757  916191 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 12:10:29.317969  916191 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0731 12:10:29.318005  916191 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0731 12:10:29.318271  916191 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 12:10:29.318288  916191 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 12:10:29.823410  916191 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 12:10:29.823438  916191 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 12:10:30.385850  916191 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 12:10:30.385880  916191 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 12:10:30.775173  916191 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 12:10:30.775201  916191 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 12:10:31.232248  916191 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 12:10:31.232311  916191 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 12:10:31.244597  916191 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 12:10:31.244623  916191 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 12:10:31.246163  916191 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 12:10:31.246185  916191 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 12:10:31.246222  916191 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0731 12:10:31.246227  916191 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0731 12:10:31.352707  916191 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 12:10:31.355246  916191 out.go:204]   - Booting up control plane ...
	I0731 12:10:31.352826  916191 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 12:10:31.355345  916191 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 12:10:31.355357  916191 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 12:10:31.355455  916191 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 12:10:31.355461  916191 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 12:10:31.355523  916191 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 12:10:31.355533  916191 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 12:10:31.356334  916191 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 12:10:31.356351  916191 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 12:10:31.359065  916191 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 12:10:31.359112  916191 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 12:10:38.861051  916191 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502066 seconds
	I0731 12:10:38.861080  916191 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.502066 seconds
	I0731 12:10:38.861179  916191 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 12:10:38.861189  916191 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 12:10:38.878145  916191 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 12:10:38.878173  916191 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 12:10:39.404843  916191 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 12:10:39.404867  916191 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0731 12:10:39.405040  916191 kubeadm.go:322] [mark-control-plane] Marking the node multinode-951087 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 12:10:39.405052  916191 command_runner.go:130] > [mark-control-plane] Marking the node multinode-951087 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 12:10:39.916591  916191 kubeadm.go:322] [bootstrap-token] Using token: lyyclb.vjxzc123vfrrsb9o
	I0731 12:10:39.918640  916191 out.go:204]   - Configuring RBAC rules ...
	I0731 12:10:39.916695  916191 command_runner.go:130] > [bootstrap-token] Using token: lyyclb.vjxzc123vfrrsb9o
	I0731 12:10:39.918758  916191 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 12:10:39.918774  916191 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 12:10:39.923890  916191 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 12:10:39.923912  916191 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 12:10:39.931817  916191 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 12:10:39.931844  916191 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 12:10:39.935826  916191 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 12:10:39.935849  916191 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 12:10:39.941079  916191 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 12:10:39.941102  916191 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 12:10:39.945212  916191 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 12:10:39.945244  916191 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 12:10:39.959782  916191 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 12:10:39.959813  916191 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 12:10:40.242331  916191 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0731 12:10:40.242358  916191 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0731 12:10:40.328719  916191 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0731 12:10:40.328745  916191 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0731 12:10:40.329850  916191 kubeadm.go:322] 
	I0731 12:10:40.329924  916191 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0731 12:10:40.329937  916191 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0731 12:10:40.329944  916191 kubeadm.go:322] 
	I0731 12:10:40.330022  916191 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0731 12:10:40.330030  916191 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0731 12:10:40.330035  916191 kubeadm.go:322] 
	I0731 12:10:40.330059  916191 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0731 12:10:40.330066  916191 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0731 12:10:40.330121  916191 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 12:10:40.330129  916191 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 12:10:40.330181  916191 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 12:10:40.330189  916191 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 12:10:40.330194  916191 kubeadm.go:322] 
	I0731 12:10:40.330244  916191 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0731 12:10:40.330252  916191 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0731 12:10:40.330256  916191 kubeadm.go:322] 
	I0731 12:10:40.330301  916191 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 12:10:40.330309  916191 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 12:10:40.330313  916191 kubeadm.go:322] 
	I0731 12:10:40.330362  916191 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0731 12:10:40.330370  916191 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0731 12:10:40.330449  916191 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 12:10:40.330457  916191 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 12:10:40.330521  916191 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 12:10:40.330528  916191 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 12:10:40.330532  916191 kubeadm.go:322] 
	I0731 12:10:40.330610  916191 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 12:10:40.330618  916191 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0731 12:10:40.330689  916191 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0731 12:10:40.330698  916191 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0731 12:10:40.330702  916191 kubeadm.go:322] 
	I0731 12:10:40.330781  916191 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token lyyclb.vjxzc123vfrrsb9o \
	I0731 12:10:40.330789  916191 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token lyyclb.vjxzc123vfrrsb9o \
	I0731 12:10:40.330884  916191 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59797f47caa702c46c8e55349da2b7fcf9d45fa97f7025328f291444513c4181 \
	I0731 12:10:40.330892  916191 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:59797f47caa702c46c8e55349da2b7fcf9d45fa97f7025328f291444513c4181 \
	I0731 12:10:40.330911  916191 kubeadm.go:322] 	--control-plane 
	I0731 12:10:40.330918  916191 command_runner.go:130] > 	--control-plane 
	I0731 12:10:40.330923  916191 kubeadm.go:322] 
	I0731 12:10:40.331002  916191 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0731 12:10:40.331008  916191 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0731 12:10:40.331012  916191 kubeadm.go:322] 
	I0731 12:10:40.331097  916191 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token lyyclb.vjxzc123vfrrsb9o \
	I0731 12:10:40.331105  916191 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token lyyclb.vjxzc123vfrrsb9o \
	I0731 12:10:40.331200  916191 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59797f47caa702c46c8e55349da2b7fcf9d45fa97f7025328f291444513c4181 
	I0731 12:10:40.331207  916191 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:59797f47caa702c46c8e55349da2b7fcf9d45fa97f7025328f291444513c4181 
	I0731 12:10:40.336307  916191 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1040-aws\n", err: exit status 1
	I0731 12:10:40.336333  916191 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1040-aws\n", err: exit status 1
	I0731 12:10:40.336491  916191 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 12:10:40.336512  916191 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
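The join commands printed above embed a bootstrap token with the 24h TTL declared in the InitConfiguration earlier in this log. If the token expires before a worker joins, a fresh command can be printed on the control plane; a sketch, using the binary path seen elsewhere in this log:

	# Standard kubeadm: mints a new token and prints a ready-to-run worker join command
	sudo /var/lib/minikube/binaries/v1.27.3/kubeadm token create --print-join-command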
	I0731 12:10:40.336548  916191 cni.go:84] Creating CNI manager for ""
	I0731 12:10:40.336561  916191 cni.go:136] 1 nodes found, recommending kindnet
	I0731 12:10:40.339886  916191 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 12:10:40.341722  916191 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 12:10:40.362966  916191 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0731 12:10:40.362995  916191 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I0731 12:10:40.363004  916191 command_runner.go:130] > Device: 3ah/58d	Inode: 5971530     Links: 1
	I0731 12:10:40.363011  916191 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 12:10:40.363018  916191 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I0731 12:10:40.363024  916191 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I0731 12:10:40.363030  916191 command_runner.go:130] > Change: 2023-07-31 11:47:52.097661811 +0000
	I0731 12:10:40.363050  916191 command_runner.go:130] >  Birth: 2023-07-31 11:47:52.053662026 +0000
	I0731 12:10:40.364678  916191 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0731 12:10:40.364698  916191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0731 12:10:40.397344  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0731 12:10:41.310866  916191 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0731 12:10:41.328091  916191 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0731 12:10:41.338193  916191 command_runner.go:130] > serviceaccount/kindnet created
	I0731 12:10:41.350084  916191 command_runner.go:130] > daemonset.apps/kindnet created
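With the kindnet ClusterRole, ClusterRoleBinding, ServiceAccount, and DaemonSet created above, the CNI is in place once the DaemonSet pods come up. A sketch of verifying the rollout with the same kubectl binary and kubeconfig used throughout this log:

	sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system rollout status daemonset kindnet --timeout=60s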
	I0731 12:10:41.355641  916191 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 12:10:41.355779  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.1 minikube.k8s.io/commit=0b7b0f4114385a1c2b88c73e894c2289f44aee35 minikube.k8s.io/name=multinode-951087 minikube.k8s.io/updated_at=2023_07_31T12_10_41_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:41.355820  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:41.496774  916191 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0731 12:10:41.498193  916191 command_runner.go:130] > -16
	I0731 12:10:41.534592  916191 command_runner.go:130] > node/multinode-951087 labeled
	I0731 12:10:41.538473  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:41.538535  916191 ops.go:34] apiserver oom_adj: -16
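An oom_adj of -16 tells the kernel's OOM killer to strongly prefer other processes over the apiserver. oom_adj is the legacy interface; current kernels translate it onto oom_score_adj, which can be read the same way. A sketch reusing the /proc lookup from the log above:

	cat /proc/$(pgrep kube-apiserver)/oom_adj        # -16, as recorded above
	cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # modern equivalent on a -1000..1000 scale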
	I0731 12:10:41.679502  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:41.679592  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:41.776667  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:42.277528  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:42.376504  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:42.777185  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:42.871986  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:43.277655  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:43.370564  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:43.776952  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:43.873007  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:44.277019  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:44.364414  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:44.777832  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:44.866739  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:45.277293  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:45.409966  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:45.777712  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:45.877486  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:46.276909  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:46.373880  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:46.777643  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:46.867763  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:47.277340  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:47.383733  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:47.777419  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:47.873621  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:48.277214  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:48.366359  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:48.777661  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:48.866718  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:49.276916  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:49.381835  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:49.777299  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:49.869586  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:50.277101  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:50.366622  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:50.777718  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:50.891978  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:51.277300  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:51.378718  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:51.777335  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:51.873962  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:52.277367  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:52.380327  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:52.776914  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:52.876875  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:53.277076  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:53.377803  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:53.777496  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:53.882125  916191 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 12:10:54.277779  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:10:54.376840  916191 command_runner.go:130] > NAME      SECRETS   AGE
	I0731 12:10:54.376863  916191 command_runner.go:130] > default   0         1s
	I0731 12:10:54.380817  916191 kubeadm.go:1081] duration metric: took 13.025076589s to wait for elevateKubeSystemPrivileges.
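The repeated NotFound errors above are expected: the kube-controller-manager creates the "default" ServiceAccount asynchronously after the namespace appears, and minikube simply polls until it exists (about 13s here). A sketch of the equivalent wait, with the same kubectl invocation the log uses:

	until sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    get sa default >/dev/null 2>&1; do sleep 0.5; done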
	I0731 12:10:54.380844  916191 kubeadm.go:406] StartCluster complete in 29.693463403s
	I0731 12:10:54.380860  916191 settings.go:142] acquiring lock: {Name:mk829b6893936aa5483dce9aaeef4d670cd88116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:10:54.380937  916191 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16968-847174/kubeconfig
	I0731 12:10:54.381582  916191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/kubeconfig: {Name:mk6696558a0c97b92d2f11c98afd477ee2b6ad51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:10:54.381834  916191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 12:10:54.382123  916191 config.go:182] Loaded profile config "multinode-951087": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 12:10:54.382219  916191 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16968-847174/kubeconfig
	I0731 12:10:54.382252  916191 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0731 12:10:54.382326  916191 addons.go:69] Setting storage-provisioner=true in profile "multinode-951087"
	I0731 12:10:54.382342  916191 addons.go:231] Setting addon storage-provisioner=true in "multinode-951087"
	I0731 12:10:54.382379  916191 host.go:66] Checking if "multinode-951087" exists ...
	I0731 12:10:54.382528  916191 kapi.go:59] client config for multinode-951087: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/client.crt", KeyFile:"/home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/client.key", CAFile:"/home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e64f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:10:54.382859  916191 cli_runner.go:164] Run: docker container inspect multinode-951087 --format={{.State.Status}}
	I0731 12:10:54.383768  916191 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0731 12:10:54.383787  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:54.383797  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:54.383804  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:54.383998  916191 cert_rotation.go:137] Starting client certificate rotation controller
	I0731 12:10:54.384485  916191 addons.go:69] Setting default-storageclass=true in profile "multinode-951087"
	I0731 12:10:54.384510  916191 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-951087"
	I0731 12:10:54.384789  916191 cli_runner.go:164] Run: docker container inspect multinode-951087 --format={{.State.Status}}
	I0731 12:10:54.409278  916191 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0731 12:10:54.409298  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:54.409307  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:54.409315  916191 round_trippers.go:580]     Content-Length: 291
	I0731 12:10:54.409321  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:54 GMT
	I0731 12:10:54.409330  916191 round_trippers.go:580]     Audit-Id: a49e6393-0a04-479e-b5df-22c9db38df18
	I0731 12:10:54.409337  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:54.409343  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:54.409350  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:54.409381  916191 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"85b8ff0a-91d2-40c3-9d46-82ccfed95f91","resourceVersion":"355","creationTimestamp":"2023-07-31T12:10:40Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0731 12:10:54.409764  916191 request.go:1188] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"85b8ff0a-91d2-40c3-9d46-82ccfed95f91","resourceVersion":"355","creationTimestamp":"2023-07-31T12:10:40Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0731 12:10:54.409807  916191 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0731 12:10:54.409813  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:54.409821  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:54.409829  916191 round_trippers.go:473]     Content-Type: application/json
	I0731 12:10:54.409835  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:54.422075  916191 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0731 12:10:54.422097  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:54.422105  916191 round_trippers.go:580]     Content-Length: 291
	I0731 12:10:54.422112  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:54 GMT
	I0731 12:10:54.422119  916191 round_trippers.go:580]     Audit-Id: ae117c50-3352-462e-b523-6bf2fd0db57f
	I0731 12:10:54.422125  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:54.422132  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:54.422138  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:54.422145  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:54.422168  916191 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"85b8ff0a-91d2-40c3-9d46-82ccfed95f91","resourceVersion":"356","creationTimestamp":"2023-07-31T12:10:40Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0731 12:10:54.422301  916191 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0731 12:10:54.422307  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:54.422314  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:54.422321  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:54.431579  916191 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16968-847174/kubeconfig
	I0731 12:10:54.431828  916191 kapi.go:59] client config for multinode-951087: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/client.crt", KeyFile:"/home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/client.key", CAFile:"/home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e64f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:10:54.432167  916191 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0731 12:10:54.432176  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:54.432185  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:54.432193  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:54.448607  916191 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:10:54.450691  916191 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 12:10:54.450712  916191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 12:10:54.450779  916191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951087
	I0731 12:10:54.449926  916191 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0731 12:10:54.451062  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:54.451073  916191 round_trippers.go:580]     Content-Length: 109
	I0731 12:10:54.451081  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:54 GMT
	I0731 12:10:54.451088  916191 round_trippers.go:580]     Audit-Id: 7850b1e6-fdd0-4ad9-8286-58e82555c9b7
	I0731 12:10:54.451094  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:54.451101  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:54.451108  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:54.451114  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:54.451135  916191 request.go:1188] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"357"},"items":[]}
	I0731 12:10:54.451418  916191 addons.go:231] Setting addon default-storageclass=true in "multinode-951087"
	I0731 12:10:54.451445  916191 host.go:66] Checking if "multinode-951087" exists ...
	I0731 12:10:54.451861  916191 cli_runner.go:164] Run: docker container inspect multinode-951087 --format={{.State.Status}}
	I0731 12:10:54.449941  916191 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0731 12:10:54.451935  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:54.451943  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:54.451950  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:54.451957  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:54.451963  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:54.451970  916191 round_trippers.go:580]     Content-Length: 291
	I0731 12:10:54.451977  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:54 GMT
	I0731 12:10:54.451983  916191 round_trippers.go:580]     Audit-Id: f36bc9c7-e8fb-4dea-bc06-be12990a0149
	I0731 12:10:54.452001  916191 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"85b8ff0a-91d2-40c3-9d46-82ccfed95f91","resourceVersion":"356","creationTimestamp":"2023-07-31T12:10:40Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0731 12:10:54.452062  916191 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-951087" context rescaled to 1 replicas
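The GET/PUT pair above rewrites the deployments/coredns scale subresource directly. The equivalent one-liner, assuming the kubeconfig context carries the profile name as minikube sets it by default:

	kubectl --context multinode-951087 -n kube-system scale deployment coredns --replicas=1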
	I0731 12:10:54.452081  916191 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 12:10:54.455224  916191 out.go:177] * Verifying Kubernetes components...
	I0731 12:10:54.462997  916191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 12:10:54.500297  916191 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 12:10:54.500317  916191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 12:10:54.500379  916191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951087
	I0731 12:10:54.505593  916191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35916 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/multinode-951087/id_rsa Username:docker}
	I0731 12:10:54.538985  916191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35916 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/multinode-951087/id_rsa Username:docker}
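Both ssh clients above go through the host port that Docker mapped to the container's port 22 (resolved by the docker container inspect calls). For hand debugging, the same session can be opened with the values from this log:

	ssh -i /home/jenkins/minikube-integration/16968-847174/.minikube/machines/multinode-951087/id_rsa -p 35916 docker@127.0.0.1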
	I0731 12:10:54.614801  916191 command_runner.go:130] > apiVersion: v1
	I0731 12:10:54.614822  916191 command_runner.go:130] > data:
	I0731 12:10:54.614827  916191 command_runner.go:130] >   Corefile: |
	I0731 12:10:54.614832  916191 command_runner.go:130] >     .:53 {
	I0731 12:10:54.614837  916191 command_runner.go:130] >         errors
	I0731 12:10:54.614842  916191 command_runner.go:130] >         health {
	I0731 12:10:54.614848  916191 command_runner.go:130] >            lameduck 5s
	I0731 12:10:54.614858  916191 command_runner.go:130] >         }
	I0731 12:10:54.614866  916191 command_runner.go:130] >         ready
	I0731 12:10:54.614874  916191 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0731 12:10:54.614885  916191 command_runner.go:130] >            pods insecure
	I0731 12:10:54.614892  916191 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0731 12:10:54.614902  916191 command_runner.go:130] >            ttl 30
	I0731 12:10:54.614907  916191 command_runner.go:130] >         }
	I0731 12:10:54.614915  916191 command_runner.go:130] >         prometheus :9153
	I0731 12:10:54.614921  916191 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0731 12:10:54.614930  916191 command_runner.go:130] >            max_concurrent 1000
	I0731 12:10:54.614935  916191 command_runner.go:130] >         }
	I0731 12:10:54.614940  916191 command_runner.go:130] >         cache 30
	I0731 12:10:54.614946  916191 command_runner.go:130] >         loop
	I0731 12:10:54.614952  916191 command_runner.go:130] >         reload
	I0731 12:10:54.614959  916191 command_runner.go:130] >         loadbalance
	I0731 12:10:54.614963  916191 command_runner.go:130] >     }
	I0731 12:10:54.614970  916191 command_runner.go:130] > kind: ConfigMap
	I0731 12:10:54.614975  916191 command_runner.go:130] > metadata:
	I0731 12:10:54.614988  916191 command_runner.go:130] >   creationTimestamp: "2023-07-31T12:10:40Z"
	I0731 12:10:54.615000  916191 command_runner.go:130] >   name: coredns
	I0731 12:10:54.615009  916191 command_runner.go:130] >   namespace: kube-system
	I0731 12:10:54.615018  916191 command_runner.go:130] >   resourceVersion: "220"
	I0731 12:10:54.615025  916191 command_runner.go:130] >   uid: ab7ed266-05eb-4c28-9cd7-474c5e72fada
	I0731 12:10:54.618970  916191 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16968-847174/kubeconfig
	I0731 12:10:54.619235  916191 kapi.go:59] client config for multinode-951087: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/client.crt", KeyFile:"/home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/client.key", CAFile:"/home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e64f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:10:54.619567  916191 node_ready.go:35] waiting up to 6m0s for node "multinode-951087" to be "Ready" ...
	I0731 12:10:54.619653  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:10:54.619661  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:54.619670  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:54.619681  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:54.620011  916191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
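The sed pipeline above edits the Corefile fetched a few lines earlier and replaces the ConfigMap in place. Reconstructed from the two sed expressions, the stanza it inserts ahead of the forward block (plus a log directive ahead of errors) is:

	hosts {
	   192.168.58.1 host.minikube.internal
	   fallthrough
	}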
	I0731 12:10:54.622599  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:10:54.622649  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:54.622687  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:54.622711  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:54.622732  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:54.622767  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:54 GMT
	I0731 12:10:54.622792  916191 round_trippers.go:580]     Audit-Id: 5bccb25c-a12a-4545-9994-889f34746889
	I0731 12:10:54.622814  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:54.622957  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"325","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0731 12:10:54.623746  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:10:54.623789  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:54.623815  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:54.623838  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:54.626267  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:10:54.626326  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:54.626349  916191 round_trippers.go:580]     Audit-Id: 77816e3b-4173-4413-a977-3bef8eb7b280
	I0731 12:10:54.626372  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:54.626408  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:54.626436  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:54.626457  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:54.626491  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:54 GMT
	I0731 12:10:54.626662  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"325","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0731 12:10:54.711817  916191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 12:10:54.735687  916191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 12:10:55.033124  916191 command_runner.go:130] > configmap/coredns replaced
	I0731 12:10:55.040813  916191 start.go:901] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0731 12:10:55.122328  916191 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0731 12:10:55.127633  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:10:55.127662  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:55.127674  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:55.127682  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:55.132618  916191 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 12:10:55.132699  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:55.132731  916191 round_trippers.go:580]     Audit-Id: f961b423-cfd9-4cc1-b7f7-68f830c74429
	I0731 12:10:55.132781  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:55.132817  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:55.132866  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:55.132892  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:55.132916  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:55 GMT
	I0731 12:10:55.133092  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"325","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0731 12:10:55.338290  916191 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0731 12:10:55.346538  916191 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0731 12:10:55.359606  916191 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0731 12:10:55.370763  916191 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0731 12:10:55.382607  916191 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0731 12:10:55.394210  916191 command_runner.go:130] > pod/storage-provisioner created
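Those six objects are the whole storage-provisioner addon. A quick after-the-fact check that it and the default StorageClass landed, with standard kubectl (context name assumed to match the profile):

	kubectl --context multinode-951087 -n kube-system get pod storage-provisioner
	kubectl --context multinode-951087 get storageclass standard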
	I0731 12:10:55.402564  916191 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0731 12:10:55.405002  916191 addons.go:502] enable addons completed in 1.022738557s: enabled=[default-storageclass storage-provisioner]
	I0731 12:10:55.627476  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:10:55.627499  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:55.627510  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:55.627517  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:55.630083  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:10:55.630111  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:55.630120  916191 round_trippers.go:580]     Audit-Id: f202e825-fb23-4f19-83dd-9dd2c8686280
	I0731 12:10:55.630127  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:55.630134  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:55.630141  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:55.630151  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:55.630166  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:55 GMT
	I0731 12:10:55.630420  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"325","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0731 12:10:56.127509  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:10:56.127533  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:56.127543  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:56.127552  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:56.130147  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:10:56.130169  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:56.130178  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:56.130185  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:56.130191  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:56.130216  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:56.130229  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:56 GMT
	I0731 12:10:56.130237  916191 round_trippers.go:580]     Audit-Id: b4f1ee11-e562-419b-bd90-77ce85e9490e
	I0731 12:10:56.130551  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"325","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0731 12:10:56.627323  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:10:56.627346  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:56.627362  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:56.627370  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:56.630779  916191 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 12:10:56.630803  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:56.630812  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:56.630822  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:56 GMT
	I0731 12:10:56.630829  916191 round_trippers.go:580]     Audit-Id: 856a8536-29eb-425f-af95-2452d3cb7ea8
	I0731 12:10:56.630835  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:56.630842  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:56.630848  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:56.631062  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"325","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0731 12:10:56.631593  916191 node_ready.go:58] node "multinode-951087" has status "Ready":"False"
	I0731 12:10:57.127406  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:10:57.127433  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:57.127443  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:57.127451  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:57.130208  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:10:57.130293  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:57.130303  916191 round_trippers.go:580]     Audit-Id: e5e11795-df67-4936-be12-7a7c52530984
	I0731 12:10:57.130312  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:57.130320  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:57.130355  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:57.130370  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:57.130378  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:57 GMT
	I0731 12:10:57.130487  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"325","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0731 12:10:57.628007  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:10:57.628031  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:57.628041  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:57.628049  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:57.630944  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:10:57.630985  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:57.631000  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:57.631008  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:57 GMT
	I0731 12:10:57.631015  916191 round_trippers.go:580]     Audit-Id: abedfe96-0e82-41c1-aa84-cc07daba5716
	I0731 12:10:57.631022  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:57.631029  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:57.631038  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:57.631649  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"393","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0731 12:10:57.632062  916191 node_ready.go:49] node "multinode-951087" has status "Ready":"True"
	I0731 12:10:57.632081  916191 node_ready.go:38] duration metric: took 3.012492307s waiting for node "multinode-951087" to be "Ready" ...
	I0731 12:10:57.632091  916191 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
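Each selector in that list is resolved by listing kube-system pods and inspecting the PodReady condition, as the requests below show. The same check by hand for the kube-dns pods, for example (context name assumed to match the profile):

	kubectl --context multinode-951087 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m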
	I0731 12:10:57.632205  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0731 12:10:57.632218  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:57.632227  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:57.632235  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:57.639602  916191 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 12:10:57.639631  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:57.639641  916191 round_trippers.go:580]     Audit-Id: 087c3f21-4216-4404-8ed6-c9bb815b24c6
	I0731 12:10:57.639648  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:57.639654  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:57.639661  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:57.639668  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:57.639680  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:57 GMT
	I0731 12:10:57.642195  916191 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"401"},"items":[{"metadata":{"name":"coredns-5d78c9869d-nb8rj","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"f9dc9fa3-310f-4097-89e6-75625c1e7651","resourceVersion":"398","creationTimestamp":"2023-07-31T12:10:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"2a4133f9-5d7a-4f3f-854c-e3d46e752156","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a4133f9-5d7a-4f3f-854c-e3d46e752156\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56435 chars]
	I0731 12:10:57.646597  916191 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-nb8rj" in "kube-system" namespace to be "Ready" ...
	I0731 12:10:57.646704  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-nb8rj
	I0731 12:10:57.646718  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:57.646736  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:57.646747  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:57.664487  916191 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0731 12:10:57.664517  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:57.664526  916191 round_trippers.go:580]     Audit-Id: d6aaae18-92c5-44e5-aa28-86b6a45a8367
	I0731 12:10:57.664534  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:57.664540  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:57.664547  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:57.664553  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:57.664560  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:57 GMT
	I0731 12:10:57.665814  916191 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-nb8rj","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"f9dc9fa3-310f-4097-89e6-75625c1e7651","resourceVersion":"398","creationTimestamp":"2023-07-31T12:10:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"2a4133f9-5d7a-4f3f-854c-e3d46e752156","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a4133f9-5d7a-4f3f-854c-e3d46e752156\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0731 12:10:57.666444  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:10:57.666461  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:57.666470  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:57.666490  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:57.669312  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:10:57.669337  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:57.669346  916191 round_trippers.go:580]     Audit-Id: cb54776e-5efc-429e-ab41-db4be0ccc156
	I0731 12:10:57.669353  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:57.669360  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:57.669367  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:57.669382  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:57.669393  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:57 GMT
	I0731 12:10:57.670139  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"393","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0731 12:10:57.670662  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-nb8rj
	I0731 12:10:57.670686  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:57.670696  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:57.670704  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:57.673261  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:10:57.673284  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:57.673293  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:57 GMT
	I0731 12:10:57.673300  916191 round_trippers.go:580]     Audit-Id: 4712fc01-a939-41b2-979e-dea64a74c265
	I0731 12:10:57.673308  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:57.673315  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:57.673322  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:57.673340  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:57.676030  916191 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-nb8rj","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"f9dc9fa3-310f-4097-89e6-75625c1e7651","resourceVersion":"398","creationTimestamp":"2023-07-31T12:10:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"2a4133f9-5d7a-4f3f-854c-e3d46e752156","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a4133f9-5d7a-4f3f-854c-e3d46e752156\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0731 12:10:57.676654  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:10:57.676673  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:57.676684  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:57.676691  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:57.691512  916191 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0731 12:10:57.691540  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:57.691556  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:57.691564  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:57 GMT
	I0731 12:10:57.691571  916191 round_trippers.go:580]     Audit-Id: f98eb805-b71e-4362-8033-de8db3f05686
	I0731 12:10:57.691578  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:57.691586  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:57.691593  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:57.692435  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"393","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0731 12:10:58.193720  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-nb8rj
	I0731 12:10:58.193790  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:58.193812  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:58.193832  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:58.196660  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:10:58.196723  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:58.196746  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:58 GMT
	I0731 12:10:58.196770  916191 round_trippers.go:580]     Audit-Id: 2992974b-445b-4b1d-bccc-6ecdb31a07a2
	I0731 12:10:58.196804  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:58.196832  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:58.196854  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:58.196876  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:58.197041  916191 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-nb8rj","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"f9dc9fa3-310f-4097-89e6-75625c1e7651","resourceVersion":"398","creationTimestamp":"2023-07-31T12:10:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"2a4133f9-5d7a-4f3f-854c-e3d46e752156","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a4133f9-5d7a-4f3f-854c-e3d46e752156\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0731 12:10:58.197601  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:10:58.197619  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:58.197628  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:58.197636  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:58.200075  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:10:58.200151  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:58.200175  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:58.200195  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:58.200232  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:58.200256  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:58.200269  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:58 GMT
	I0731 12:10:58.200277  916191 round_trippers.go:580]     Audit-Id: 7100f32b-d683-418b-a06e-28efdf3c2731
	I0731 12:10:58.200513  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"393","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0731 12:10:58.693584  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-nb8rj
	I0731 12:10:58.693645  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:58.693678  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:58.693700  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:58.696421  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:10:58.696492  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:58.696507  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:58.696516  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:58 GMT
	I0731 12:10:58.696523  916191 round_trippers.go:580]     Audit-Id: 89acc4e3-0cdc-43be-99ba-b8c8b7a11b57
	I0731 12:10:58.696529  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:58.696536  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:58.696543  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:58.696700  916191 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-nb8rj","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"f9dc9fa3-310f-4097-89e6-75625c1e7651","resourceVersion":"412","creationTimestamp":"2023-07-31T12:10:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"2a4133f9-5d7a-4f3f-854c-e3d46e752156","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a4133f9-5d7a-4f3f-854c-e3d46e752156\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0731 12:10:58.697235  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:10:58.697252  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:58.697261  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:58.697272  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:58.699620  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:10:58.699690  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:58.699713  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:58.699791  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:58 GMT
	I0731 12:10:58.699804  916191 round_trippers.go:580]     Audit-Id: 24e41673-73b1-4570-9589-fdefc6664960
	I0731 12:10:58.699812  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:58.699819  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:58.699826  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:58.699965  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"393","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0731 12:10:58.700452  916191 pod_ready.go:92] pod "coredns-5d78c9869d-nb8rj" in "kube-system" namespace has status "Ready":"True"
	I0731 12:10:58.700471  916191 pod_ready.go:81] duration metric: took 1.053843249s waiting for pod "coredns-5d78c9869d-nb8rj" in "kube-system" namespace to be "Ready" ...
	I0731 12:10:58.700485  916191 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-951087" in "kube-system" namespace to be "Ready" ...
	I0731 12:10:58.700546  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-951087
	I0731 12:10:58.700554  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:58.700562  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:58.700570  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:58.703024  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:10:58.703051  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:58.703062  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:58.703068  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:58 GMT
	I0731 12:10:58.703076  916191 round_trippers.go:580]     Audit-Id: 681a5426-be61-4ef8-a7dc-48b3e688f3a4
	I0731 12:10:58.703084  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:58.703092  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:58.703098  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:58.703267  916191 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-951087","namespace":"kube-system","uid":"37276bdd-7289-4086-8bb0-b8f832400a26","resourceVersion":"300","creationTimestamp":"2023-07-31T12:10:40Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ce01e94000c407043449b0d977079d34","kubernetes.io/config.mirror":"ce01e94000c407043449b0d977079d34","kubernetes.io/config.seen":"2023-07-31T12:10:40.309371836Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0731 12:10:58.703729  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:10:58.703746  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:58.703755  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:58.703769  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:58.706213  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:10:58.706274  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:58.706296  916191 round_trippers.go:580]     Audit-Id: eac5be3d-48f7-492f-9a87-d44980d980f2
	I0731 12:10:58.706311  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:58.706319  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:58.706326  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:58.706347  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:58.706364  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:58 GMT
	I0731 12:10:58.706522  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"393","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0731 12:10:58.706983  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-951087
	I0731 12:10:58.706998  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:58.707006  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:58.707013  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:58.709449  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:10:58.709502  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:58.709524  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:58.709546  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:58.709593  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:58.709605  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:58.709612  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:58 GMT
	I0731 12:10:58.709618  916191 round_trippers.go:580]     Audit-Id: ce44bc57-367e-4d83-8c17-1228f99a232f
	I0731 12:10:58.709745  916191 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-951087","namespace":"kube-system","uid":"37276bdd-7289-4086-8bb0-b8f832400a26","resourceVersion":"300","creationTimestamp":"2023-07-31T12:10:40Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ce01e94000c407043449b0d977079d34","kubernetes.io/config.mirror":"ce01e94000c407043449b0d977079d34","kubernetes.io/config.seen":"2023-07-31T12:10:40.309371836Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0731 12:10:58.710216  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:10:58.710229  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:58.710238  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:58.710246  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:58.712617  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:10:58.712655  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:58.712663  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:58 GMT
	I0731 12:10:58.712670  916191 round_trippers.go:580]     Audit-Id: d45b06a6-a068-4577-a177-7fa6bb398a2e
	I0731 12:10:58.712677  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:58.712684  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:58.712691  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:58.712703  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:58.712827  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"393","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0731 12:10:59.213968  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-951087
	I0731 12:10:59.213993  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:59.214003  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:59.214011  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:59.220938  916191 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 12:10:59.221012  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:59.221035  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:59.221058  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:59.221097  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:59.221117  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:59.221142  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:59 GMT
	I0731 12:10:59.221176  916191 round_trippers.go:580]     Audit-Id: be12fc06-c901-4d02-824c-5bb5e17fda4e
	I0731 12:10:59.221327  916191 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-951087","namespace":"kube-system","uid":"37276bdd-7289-4086-8bb0-b8f832400a26","resourceVersion":"300","creationTimestamp":"2023-07-31T12:10:40Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ce01e94000c407043449b0d977079d34","kubernetes.io/config.mirror":"ce01e94000c407043449b0d977079d34","kubernetes.io/config.seen":"2023-07-31T12:10:40.309371836Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0731 12:10:59.221872  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:10:59.221888  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:59.221897  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:59.221904  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:59.224249  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:10:59.224271  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:59.224280  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:59.224287  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:59.224294  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:59.224301  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:59.224311  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:59 GMT
	I0731 12:10:59.224322  916191 round_trippers.go:580]     Audit-Id: 2b856139-92dc-43e0-9ece-28f95b8c1379
	I0731 12:10:59.224564  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"393","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0731 12:10:59.713869  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-951087
	I0731 12:10:59.713896  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:59.713906  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:59.713914  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:59.716422  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:10:59.716448  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:59.716457  916191 round_trippers.go:580]     Audit-Id: 48af360b-29fb-45f2-a3cc-3a69acd1443f
	I0731 12:10:59.716464  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:59.716470  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:59.716477  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:59.716487  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:59.716498  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:59 GMT
	I0731 12:10:59.716669  916191 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-951087","namespace":"kube-system","uid":"37276bdd-7289-4086-8bb0-b8f832400a26","resourceVersion":"300","creationTimestamp":"2023-07-31T12:10:40Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ce01e94000c407043449b0d977079d34","kubernetes.io/config.mirror":"ce01e94000c407043449b0d977079d34","kubernetes.io/config.seen":"2023-07-31T12:10:40.309371836Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0731 12:10:59.717138  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:10:59.717155  916191 round_trippers.go:469] Request Headers:
	I0731 12:10:59.717164  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:10:59.717175  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:10:59.719410  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:10:59.719431  916191 round_trippers.go:577] Response Headers:
	I0731 12:10:59.719441  916191 round_trippers.go:580]     Audit-Id: c4ebeb04-09fc-46d8-8e19-7b2e55931345
	I0731 12:10:59.719448  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:10:59.719458  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:10:59.719472  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:10:59.719486  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:10:59.719493  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:10:59 GMT
	I0731 12:10:59.724483  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"393","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0731 12:11:00.214319  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-951087
	I0731 12:11:00.214348  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:00.214358  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:00.214367  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:00.217584  916191 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 12:11:00.217607  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:00.217618  916191 round_trippers.go:580]     Audit-Id: 80aefdb4-09da-4d70-8063-0a79a1dd7b63
	I0731 12:11:00.217625  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:00.217631  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:00.217638  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:00.217645  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:00.217653  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:00 GMT
	I0731 12:11:00.218002  916191 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-951087","namespace":"kube-system","uid":"37276bdd-7289-4086-8bb0-b8f832400a26","resourceVersion":"300","creationTimestamp":"2023-07-31T12:10:40Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ce01e94000c407043449b0d977079d34","kubernetes.io/config.mirror":"ce01e94000c407043449b0d977079d34","kubernetes.io/config.seen":"2023-07-31T12:10:40.309371836Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0731 12:11:00.218557  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:11:00.218587  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:00.218597  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:00.218605  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:00.221435  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:00.221474  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:00.221484  916191 round_trippers.go:580]     Audit-Id: d9b5f96c-79c2-44c6-bb1d-28f57f4357b6
	I0731 12:11:00.221492  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:00.221500  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:00.221507  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:00.221515  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:00.221522  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:00 GMT
	I0731 12:11:00.221840  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"393","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0731 12:11:00.713805  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-951087
	I0731 12:11:00.713834  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:00.713845  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:00.713852  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:00.716474  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:00.716499  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:00.716509  916191 round_trippers.go:580]     Audit-Id: 25876be1-c986-402b-bff0-5831984bd5cb
	I0731 12:11:00.716515  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:00.716522  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:00.716528  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:00.716539  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:00.716547  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:00 GMT
	I0731 12:11:00.716681  916191 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-951087","namespace":"kube-system","uid":"37276bdd-7289-4086-8bb0-b8f832400a26","resourceVersion":"421","creationTimestamp":"2023-07-31T12:10:40Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ce01e94000c407043449b0d977079d34","kubernetes.io/config.mirror":"ce01e94000c407043449b0d977079d34","kubernetes.io/config.seen":"2023-07-31T12:10:40.309371836Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0731 12:11:00.717150  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:11:00.717163  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:00.717171  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:00.717182  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:00.719645  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:00.719670  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:00.719680  916191 round_trippers.go:580]     Audit-Id: 96f49967-3817-435b-9d72-20edb3e47498
	I0731 12:11:00.719687  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:00.719693  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:00.719700  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:00.719707  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:00.719718  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:00 GMT
	I0731 12:11:00.719863  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"393","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0731 12:11:00.720268  916191 pod_ready.go:92] pod "etcd-multinode-951087" in "kube-system" namespace has status "Ready":"True"
	I0731 12:11:00.720288  916191 pod_ready.go:81] duration metric: took 2.019792945s waiting for pod "etcd-multinode-951087" in "kube-system" namespace to be "Ready" ...
	I0731 12:11:00.720303  916191 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-951087" in "kube-system" namespace to be "Ready" ...
	I0731 12:11:00.720364  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-951087
	I0731 12:11:00.720374  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:00.720382  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:00.720389  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:00.724862  916191 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 12:11:00.724889  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:00.724897  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:00.724904  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:00.724911  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:00.724919  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:00.724926  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:00 GMT
	I0731 12:11:00.724933  916191 round_trippers.go:580]     Audit-Id: af8df55e-8569-42dd-82a3-a7b2d0cf96d9
	I0731 12:11:00.725075  916191 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-951087","namespace":"kube-system","uid":"5006315b-a9f0-4c65-a12c-532521088aca","resourceVersion":"422","creationTimestamp":"2023-07-31T12:10:39Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"2c12830565eebcd51287ac2e207ab987","kubernetes.io/config.mirror":"2c12830565eebcd51287ac2e207ab987","kubernetes.io/config.seen":"2023-07-31T12:10:32.705357367Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0731 12:11:00.725627  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:11:00.725645  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:00.725654  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:00.725662  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:00.727919  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:00.727950  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:00.727957  916191 round_trippers.go:580]     Audit-Id: f3c30aff-200c-415c-b982-4f971afac074
	I0731 12:11:00.727964  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:00.727971  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:00.727978  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:00.727984  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:00.727992  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:00 GMT
	I0731 12:11:00.728139  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"393","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0731 12:11:00.728508  916191 pod_ready.go:92] pod "kube-apiserver-multinode-951087" in "kube-system" namespace has status "Ready":"True"
	I0731 12:11:00.728517  916191 pod_ready.go:81] duration metric: took 8.202598ms waiting for pod "kube-apiserver-multinode-951087" in "kube-system" namespace to be "Ready" ...
	I0731 12:11:00.728527  916191 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-951087" in "kube-system" namespace to be "Ready" ...
	I0731 12:11:00.728583  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-951087
	I0731 12:11:00.728587  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:00.728595  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:00.728601  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:00.732297  916191 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 12:11:00.732353  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:00.732375  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:00.732397  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:00.732434  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:00.732459  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:00 GMT
	I0731 12:11:00.732481  916191 round_trippers.go:580]     Audit-Id: 75b0e124-7f23-4614-a09c-737c5d9ee2eb
	I0731 12:11:00.732503  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:00.733019  916191 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-951087","namespace":"kube-system","uid":"aec99526-80d6-49c6-9b47-37c2bc155692","resourceVersion":"423","creationTimestamp":"2023-07-31T12:10:40Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"030c32208cdf1c653a4b957eca963c70","kubernetes.io/config.mirror":"030c32208cdf1c653a4b957eca963c70","kubernetes.io/config.seen":"2023-07-31T12:10:40.309381493Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0731 12:11:00.828834  916191 request.go:628] Waited for 95.231735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:11:00.828886  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:11:00.828892  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:00.828902  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:00.828920  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:00.831532  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:00.831608  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:00.831633  916191 round_trippers.go:580]     Audit-Id: c4d4bde3-a7d4-4481-9f5f-164e7ce584ff
	I0731 12:11:00.831655  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:00.831685  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:00.831693  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:00.831711  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:00.831727  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:00 GMT
	I0731 12:11:00.831865  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"393","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0731 12:11:00.832335  916191 pod_ready.go:92] pod "kube-controller-manager-multinode-951087" in "kube-system" namespace has status "Ready":"True"
	I0731 12:11:00.832354  916191 pod_ready.go:81] duration metric: took 103.820432ms waiting for pod "kube-controller-manager-multinode-951087" in "kube-system" namespace to be "Ready" ...
	I0731 12:11:00.832366  916191 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x2ljd" in "kube-system" namespace to be "Ready" ...
	I0731 12:11:01.028808  916191 request.go:628] Waited for 196.337749ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2ljd
	I0731 12:11:01.028864  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2ljd
	I0731 12:11:01.028869  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:01.028879  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:01.028913  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:01.031586  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:01.031611  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:01.031619  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:01.031627  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:01.031633  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:01 GMT
	I0731 12:11:01.031640  916191 round_trippers.go:580]     Audit-Id: 9edb1f14-081c-4102-9f2e-5748a0768b82
	I0731 12:11:01.031647  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:01.031653  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:01.031798  916191 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-x2ljd","generateName":"kube-proxy-","namespace":"kube-system","uid":"ae696871-fdaa-44a8-8f72-914cf534dd5c","resourceVersion":"388","creationTimestamp":"2023-07-31T12:10:54Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5fd1aa1d-5807-48c6-81a3-ef82b2dd0da1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5fd1aa1d-5807-48c6-81a3-ef82b2dd0da1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
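The request.go:628 "Waited ... due to client-side throttling" entries around here come from client-go's own token-bucket rate limiter; as the message itself notes, this is client-side throttling, not API-server priority and fairness. A hedged sketch of the knob involved — QPS and Burst on rest.Config, whose client-go defaults are 5 and 10; the numbers below are illustrative, not minikube's configuration:

```go
// Sketch of where the client-side limiter is configured.
package throttle

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// NewClient builds a clientset with a relaxed request rate limiter, which
// would shorten the request.go:628 waits seen in this log.
func NewClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // sustained requests per second (illustrative)
	cfg.Burst = 100 // short-term burst allowance (illustrative)
	return kubernetes.NewForConfig(cfg)
}
```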
	I0731 12:11:01.228669  916191 request.go:628] Waited for 196.357268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:11:01.228744  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:11:01.228749  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:01.228758  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:01.228793  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:01.231657  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:01.231687  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:01.231697  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:01 GMT
	I0731 12:11:01.231711  916191 round_trippers.go:580]     Audit-Id: b3aa2390-eeba-4519-a008-5ec50498e278
	I0731 12:11:01.231719  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:01.231733  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:01.231777  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:01.231790  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:01.231938  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"393","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0731 12:11:01.232406  916191 pod_ready.go:92] pod "kube-proxy-x2ljd" in "kube-system" namespace has status "Ready":"True"
	I0731 12:11:01.232447  916191 pod_ready.go:81] duration metric: took 400.05482ms waiting for pod "kube-proxy-x2ljd" in "kube-system" namespace to be "Ready" ...
	I0731 12:11:01.232466  916191 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-951087" in "kube-system" namespace to be "Ready" ...
	I0731 12:11:01.428870  916191 request.go:628] Waited for 196.333474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-951087
	I0731 12:11:01.428970  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-951087
	I0731 12:11:01.428976  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:01.428987  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:01.428995  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:01.431862  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:01.431887  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:01.431896  916191 round_trippers.go:580]     Audit-Id: df7d6daf-2951-498c-960c-8ec26101a27d
	I0731 12:11:01.431903  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:01.431911  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:01.431917  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:01.431924  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:01.431931  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:01 GMT
	I0731 12:11:01.432363  916191 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-951087","namespace":"kube-system","uid":"6bb8f158-b688-4e46-a49b-5caaafc0516a","resourceVersion":"420","creationTimestamp":"2023-07-31T12:10:40Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c0663a64e088b0ec1a92123f2a642643","kubernetes.io/config.mirror":"c0663a64e088b0ec1a92123f2a642643","kubernetes.io/config.seen":"2023-07-31T12:10:40.309382749Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0731 12:11:01.628070  916191 request.go:628] Waited for 195.271997ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:11:01.628144  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:11:01.628161  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:01.628173  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:01.628180  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:01.631023  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:01.631068  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:01.631078  916191 round_trippers.go:580]     Audit-Id: c96c8d00-f632-4db0-a9a3-3708101a53a4
	I0731 12:11:01.631085  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:01.631092  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:01.631099  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:01.631107  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:01.631113  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:01 GMT
	I0731 12:11:01.631230  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"393","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0731 12:11:01.631663  916191 pod_ready.go:92] pod "kube-scheduler-multinode-951087" in "kube-system" namespace has status "Ready":"True"
	I0731 12:11:01.631682  916191 pod_ready.go:81] duration metric: took 399.208744ms waiting for pod "kube-scheduler-multinode-951087" in "kube-system" namespace to be "Ready" ...
	I0731 12:11:01.631694  916191 pod_ready.go:38] duration metric: took 3.999589584s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 12:11:01.631730  916191 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:11:01.631797  916191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:11:01.645540  916191 command_runner.go:130] > 1270
	I0731 12:11:01.645579  916191 api_server.go:72] duration metric: took 7.193476683s to wait for apiserver process to appear ...
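Above, api_server.go confirms a kube-apiserver process exists by running pgrep inside the node over SSH; command_runner's `1270` reply is the matched PID. A local-equivalent sketch, running pgrep directly instead of through minikube's ssh_runner (the helper name is hypothetical):

```go
// Sketch: the same process check run locally. pgrep -xnf matches against
// the full command line; a PID on stdout (e.g. "1270") means the
// apiserver process is up.
package procs

import (
	"fmt"
	"os/exec"
	"strings"
)

// ApiserverPID is a hypothetical helper, not minikube's API.
func ApiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", fmt.Errorf("kube-apiserver process not found: %w", err)
	}
	return strings.TrimSpace(string(out)), nil
}
```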
	I0731 12:11:01.645590  916191 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:11:01.645610  916191 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0731 12:11:01.654755  916191 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
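The healthz wait is an HTTPS GET expecting a 200 whose body is `ok`, exactly what the two lines above show. A standalone sketch of such a probe (it skips certificate verification for brevity, whereas the real check trusts the cluster CA from the kubeconfig):

```go
// Sketch of a /healthz probe like the one logged above. InsecureSkipVerify
// is for the sketch only; a real client authenticates with the cluster CA.
package health

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func ApiserverHealthy(base string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz returned %d: %q", resp.StatusCode, body)
	}
	return nil
}
```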
	I0731 12:11:01.654823  916191 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0731 12:11:01.654838  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:01.654848  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:01.654856  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:01.656364  916191 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 12:11:01.656408  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:01.656419  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:01 GMT
	I0731 12:11:01.656426  916191 round_trippers.go:580]     Audit-Id: 321581cd-8699-4d00-93da-8cd541002315
	I0731 12:11:01.656438  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:01.656449  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:01.656456  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:01.656467  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:01.656474  916191 round_trippers.go:580]     Content-Length: 263
	I0731 12:11:01.656501  916191 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.3",
	  "gitCommit": "25b4e43193bcda6c7328a6d147b1fb73a33f1598",
	  "gitTreeState": "clean",
	  "buildDate": "2023-06-14T09:47:40Z",
	  "goVersion": "go1.20.5",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I0731 12:11:01.656599  916191 api_server.go:141] control plane version: v1.27.3
	I0731 12:11:01.656616  916191 api_server.go:131] duration metric: took 11.020418ms to wait for apiserver health ...
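The /version body above is the stock version.Info object from apimachinery, and the "control plane version" logged is just its gitVersion field. A small decoding sketch:

```go
// Sketch: decoding the /version payload shown above into apimachinery's
// version.Info; its JSON tags (major, minor, gitVersion, ...) match the body.
package main

import (
	"encoding/json"
	"fmt"

	"k8s.io/apimachinery/pkg/version"
)

func main() {
	body := []byte(`{"major":"1","minor":"27","gitVersion":"v1.27.3"}`)
	var v version.Info
	if err := json.Unmarshal(body, &v); err != nil {
		panic(err)
	}
	fmt.Println(v.GitVersion) // v1.27.3 — the control plane version above
}
```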
	I0731 12:11:01.656624  916191 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 12:11:01.829046  916191 request.go:628] Waited for 172.33312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0731 12:11:01.829157  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0731 12:11:01.829201  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:01.829218  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:01.829227  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:01.833038  916191 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 12:11:01.833065  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:01.833076  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:01.833083  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:01 GMT
	I0731 12:11:01.833090  916191 round_trippers.go:580]     Audit-Id: e1eb0647-d56b-4038-a7e5-c243af6675da
	I0731 12:11:01.833096  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:01.833107  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:01.833116  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:01.833843  916191 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"424"},"items":[{"metadata":{"name":"coredns-5d78c9869d-nb8rj","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"f9dc9fa3-310f-4097-89e6-75625c1e7651","resourceVersion":"412","creationTimestamp":"2023-07-31T12:10:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"2a4133f9-5d7a-4f3f-854c-e3d46e752156","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a4133f9-5d7a-4f3f-854c-e3d46e752156\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0731 12:11:01.836398  916191 system_pods.go:59] 8 kube-system pods found
	I0731 12:11:01.836433  916191 system_pods.go:61] "coredns-5d78c9869d-nb8rj" [f9dc9fa3-310f-4097-89e6-75625c1e7651] Running
	I0731 12:11:01.836440  916191 system_pods.go:61] "etcd-multinode-951087" [37276bdd-7289-4086-8bb0-b8f832400a26] Running
	I0731 12:11:01.836445  916191 system_pods.go:61] "kindnet-4cjwb" [54bdceae-01af-4821-9cf7-298343953a96] Running
	I0731 12:11:01.836450  916191 system_pods.go:61] "kube-apiserver-multinode-951087" [5006315b-a9f0-4c65-a12c-532521088aca] Running
	I0731 12:11:01.836456  916191 system_pods.go:61] "kube-controller-manager-multinode-951087" [aec99526-80d6-49c6-9b47-37c2bc155692] Running
	I0731 12:11:01.836461  916191 system_pods.go:61] "kube-proxy-x2ljd" [ae696871-fdaa-44a8-8f72-914cf534dd5c] Running
	I0731 12:11:01.836466  916191 system_pods.go:61] "kube-scheduler-multinode-951087" [6bb8f158-b688-4e46-a49b-5caaafc0516a] Running
	I0731 12:11:01.836475  916191 system_pods.go:61] "storage-provisioner" [78fd5833-1fa8-4e9a-8411-0c5880d460a7] Running
	I0731 12:11:01.836481  916191 system_pods.go:74] duration metric: took 179.852233ms to wait for pod list to return data ...
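The system_pods sweep above does a single LIST of kube-system (throttled client-side, per request.go:628) and checks each pod's phase; here all eight are Running. An illustrative sketch of the same check (the helper name is hypothetical):

```go
// Sketch of the kube-system sweep: list the namespace once and report any
// pod whose phase is not Running.
package syspods

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// NotRunning is a hypothetical helper, not minikube's system_pods API.
func NotRunning(ctx context.Context, cs kubernetes.Interface) ([]string, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var bad []string
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			bad = append(bad, fmt.Sprintf("%s (%s)", p.Name, p.Status.Phase))
		}
	}
	return bad, nil
}
```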
	I0731 12:11:01.836492  916191 default_sa.go:34] waiting for default service account to be created ...
	I0731 12:11:02.028819  916191 request.go:628] Waited for 192.247786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0731 12:11:02.028896  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0731 12:11:02.028908  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:02.028918  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:02.028926  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:02.031593  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:02.031618  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:02.031628  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:02.031667  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:02.031681  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:02.031688  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:02.031695  916191 round_trippers.go:580]     Content-Length: 261
	I0731 12:11:02.031707  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:02 GMT
	I0731 12:11:02.031718  916191 round_trippers.go:580]     Audit-Id: 03e34754-c0bf-4721-bb91-0895911ceec5
	I0731 12:11:02.031741  916191 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"885ca1f4-3b79-49b2-96f6-9f7fcdcdfddb","resourceVersion":"320","creationTimestamp":"2023-07-31T12:10:53Z"}}]}
	I0731 12:11:02.031971  916191 default_sa.go:45] found service account: "default"
	I0731 12:11:02.031990  916191 default_sa.go:55] duration metric: took 195.491992ms for default service account to be created ...
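The "Waited for ... due to client-side throttling, not priority and fairness" lines are client-go's default rate limiter at work: out of the box a rest.Config allows roughly 5 requests/s with a burst of 10, so back-to-back list calls are delayed on the client before they ever reach server-side priority and fairness. If that limiter matters in your own tooling it can be raised on the config (a fragment reusing cfg from the sketch above; the values are illustrative):

    cfg.QPS = 50    // client-go default is 5
    cfg.Burst = 100 // client-go default is 10
    cs, err := kubernetes.NewForConfig(cfg)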
	I0731 12:11:02.031999  916191 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 12:11:02.228448  916191 request.go:628] Waited for 196.380874ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0731 12:11:02.228580  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0731 12:11:02.228593  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:02.228603  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:02.228611  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:02.232322  916191 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 12:11:02.232395  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:02.232411  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:02.232419  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:02.232426  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:02.232433  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:02.232443  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:02 GMT
	I0731 12:11:02.232450  916191 round_trippers.go:580]     Audit-Id: 55e0a374-595d-4e66-8825-27471ad3bf87
	I0731 12:11:02.233549  916191 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"coredns-5d78c9869d-nb8rj","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"f9dc9fa3-310f-4097-89e6-75625c1e7651","resourceVersion":"412","creationTimestamp":"2023-07-31T12:10:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"2a4133f9-5d7a-4f3f-854c-e3d46e752156","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a4133f9-5d7a-4f3f-854c-e3d46e752156\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0731 12:11:02.236043  916191 system_pods.go:86] 8 kube-system pods found
	I0731 12:11:02.236074  916191 system_pods.go:89] "coredns-5d78c9869d-nb8rj" [f9dc9fa3-310f-4097-89e6-75625c1e7651] Running
	I0731 12:11:02.236082  916191 system_pods.go:89] "etcd-multinode-951087" [37276bdd-7289-4086-8bb0-b8f832400a26] Running
	I0731 12:11:02.236087  916191 system_pods.go:89] "kindnet-4cjwb" [54bdceae-01af-4821-9cf7-298343953a96] Running
	I0731 12:11:02.236099  916191 system_pods.go:89] "kube-apiserver-multinode-951087" [5006315b-a9f0-4c65-a12c-532521088aca] Running
	I0731 12:11:02.236130  916191 system_pods.go:89] "kube-controller-manager-multinode-951087" [aec99526-80d6-49c6-9b47-37c2bc155692] Running
	I0731 12:11:02.236142  916191 system_pods.go:89] "kube-proxy-x2ljd" [ae696871-fdaa-44a8-8f72-914cf534dd5c] Running
	I0731 12:11:02.236147  916191 system_pods.go:89] "kube-scheduler-multinode-951087" [6bb8f158-b688-4e46-a49b-5caaafc0516a] Running
	I0731 12:11:02.236152  916191 system_pods.go:89] "storage-provisioner" [78fd5833-1fa8-4e9a-8411-0c5880d460a7] Running
	I0731 12:11:02.236159  916191 system_pods.go:126] duration metric: took 204.155593ms to wait for k8s-apps to be running ...
	I0731 12:11:02.236166  916191 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 12:11:02.236226  916191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 12:11:02.251394  916191 system_svc.go:56] duration metric: took 15.216023ms WaitForService to wait for kubelet.
	I0731 12:11:02.251468  916191 kubeadm.go:581] duration metric: took 7.799363867s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0731 12:11:02.251490  916191 node_conditions.go:102] verifying NodePressure condition ...
	I0731 12:11:02.428910  916191 request.go:628] Waited for 177.307332ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0731 12:11:02.428966  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0731 12:11:02.428972  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:02.428981  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:02.428992  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:02.431707  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:02.431731  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:02.431739  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:02.431746  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:02.431753  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:02.431759  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:02.431766  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:02 GMT
	I0731 12:11:02.431777  916191 round_trippers.go:580]     Audit-Id: 2f14d5a1-c9dc-4244-be63-0caa4764c992
	I0731 12:11:02.431870  916191 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"393","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I0731 12:11:02.432330  916191 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0731 12:11:02.432356  916191 node_conditions.go:123] node cpu capacity is 2
	I0731 12:11:02.432369  916191 node_conditions.go:105] duration metric: took 180.871389ms to run NodePressure ...
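The NodePressure verification reads capacity straight off the NodeList response, which is where the 203034800Ki ephemeral-storage and 2-CPU figures above come from. Reading the same fields with client-go (sketch; reuses the clientset cs from the first example and assumes corev1 is the import k8s.io/api/core/v1):

    nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, n := range nodes.Items {
        cpu := n.Status.Capacity[corev1.ResourceCPU]
        eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
        fmt.Printf("%s cpu=%s ephemeral=%s\n", n.Name, cpu.String(), eph.String())
    }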
	I0731 12:11:02.432414  916191 start.go:228] waiting for startup goroutines ...
	I0731 12:11:02.432428  916191 start.go:233] waiting for cluster config update ...
	I0731 12:11:02.432439  916191 start.go:242] writing updated cluster config ...
	I0731 12:11:02.434857  916191 out.go:177] 
	I0731 12:11:02.436753  916191 config.go:182] Loaded profile config "multinode-951087": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 12:11:02.436856  916191 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/config.json ...
	I0731 12:11:02.439210  916191 out.go:177] * Starting worker node multinode-951087-m02 in cluster multinode-951087
	I0731 12:11:02.441113  916191 cache.go:122] Beginning downloading kic base image for docker with crio
	I0731 12:11:02.443316  916191 out.go:177] * Pulling base image ...
	I0731 12:11:02.445545  916191 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 12:11:02.445575  916191 cache.go:57] Caching tarball of preloaded images
	I0731 12:11:02.445629  916191 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0731 12:11:02.445674  916191 preload.go:174] Found /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0731 12:11:02.445687  916191 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0731 12:11:02.445802  916191 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/config.json ...
	I0731 12:11:02.463149  916191 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0731 12:11:02.463173  916191 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0731 12:11:02.463198  916191 cache.go:195] Successfully downloaded all kic artifacts
	I0731 12:11:02.463226  916191 start.go:365] acquiring machines lock for multinode-951087-m02: {Name:mk8f6fb352b633f443c5aa384236d4d75d526121 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:11:02.463353  916191 start.go:369] acquired machines lock for "multinode-951087-m02" in 109.054µs
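Machine creation is serialized behind a named machines lock so concurrent minikube processes cannot provision the same profile at once; here the lock was uncontended and was acquired in about 109µs. A file-lock sketch of the same idea (illustrative only; minikube's actual lock implementation differs, and the lock path below is made up):

    // flock-based serialization sketch (Linux); not minikube's real lock.
    f, err := os.OpenFile("/tmp/minikube-machines.lock", os.O_CREATE|os.O_RDWR, 0o600)
    if err != nil {
        panic(err)
    }
    defer f.Close()
    // Blocks until the exclusive lock is held.
    if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
        panic(err)
    }
    defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
    // ... create and provision the machine while holding the lock ...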
	I0731 12:11:02.463381  916191 start.go:93] Provisioning new machine with config: &{Name:multinode-951087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-951087 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0731 12:11:02.463472  916191 start.go:125] createHost starting for "m02" (driver="docker")
	I0731 12:11:02.466088  916191 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0731 12:11:02.466205  916191 start.go:159] libmachine.API.Create for "multinode-951087" (driver="docker")
	I0731 12:11:02.466226  916191 client.go:168] LocalClient.Create starting
	I0731 12:11:02.466535  916191 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem
	I0731 12:11:02.466593  916191 main.go:141] libmachine: Decoding PEM data...
	I0731 12:11:02.466610  916191 main.go:141] libmachine: Parsing certificate...
	I0731 12:11:02.466686  916191 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem
	I0731 12:11:02.466717  916191 main.go:141] libmachine: Decoding PEM data...
	I0731 12:11:02.466729  916191 main.go:141] libmachine: Parsing certificate...
	I0731 12:11:02.467014  916191 cli_runner.go:164] Run: docker network inspect multinode-951087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 12:11:02.486963  916191 network_create.go:76] Found existing network {name:multinode-951087 subnet:0x4001578090 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0731 12:11:02.487004  916191 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-951087-m02" container
	I0731 12:11:02.487100  916191 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 12:11:02.504257  916191 cli_runner.go:164] Run: docker volume create multinode-951087-m02 --label name.minikube.sigs.k8s.io=multinode-951087-m02 --label created_by.minikube.sigs.k8s.io=true
	I0731 12:11:02.523508  916191 oci.go:103] Successfully created a docker volume multinode-951087-m02
	I0731 12:11:02.523602  916191 cli_runner.go:164] Run: docker run --rm --name multinode-951087-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-951087-m02 --entrypoint /usr/bin/test -v multinode-951087-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0731 12:11:03.176512  916191 oci.go:107] Successfully prepared a docker volume multinode-951087-m02
	I0731 12:11:03.176553  916191 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 12:11:03.176575  916191 kic.go:190] Starting extracting preloaded images to volume ...
	I0731 12:11:03.176672  916191 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-951087-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0731 12:11:07.363741  916191 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-951087-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.187015106s)
	I0731 12:11:07.363775  916191 kic.go:199] duration metric: took 4.187196 seconds to extract preloaded images to volume
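The two docker run commands above are the kic "preload" trick: the throwaway container with --entrypoint /usr/bin/test forces Docker to create the named volume and populate it from the base image's /var, and the second container untars the lz4-compressed image cache directly into that volume, so the node starts with all of its Kubernetes images already in CRI-O's storage. Driving the same two steps from Go via the docker CLI (sketch; the volume, image, and tarball names are placeholders, and the imports os/exec and log are assumed):

    cmds := [][]string{
        {"docker", "volume", "create", "node-vol"},
        {"docker", "run", "--rm", "--entrypoint", "/usr/bin/tar",
            "-v", "/path/to/preload.tar.lz4:/preloaded.tar:ro",
            "-v", "node-vol:/extractDir",
            "base-image:tag",
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir"},
    }
    for _, c := range cmds {
        out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
        if err != nil {
            log.Fatalf("%v: %v\n%s", c, err, out)
        }
    }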
	W0731 12:11:07.363928  916191 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0731 12:11:07.364051  916191 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0731 12:11:07.427751  916191 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-951087-m02 --name multinode-951087-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-951087-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-951087-m02 --network multinode-951087 --ip 192.168.58.3 --volume multinode-951087-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0731 12:11:07.771090  916191 cli_runner.go:164] Run: docker container inspect multinode-951087-m02 --format={{.State.Running}}
	I0731 12:11:07.794526  916191 cli_runner.go:164] Run: docker container inspect multinode-951087-m02 --format={{.State.Status}}
	I0731 12:11:07.822657  916191 cli_runner.go:164] Run: docker exec multinode-951087-m02 stat /var/lib/dpkg/alternatives/iptables
	I0731 12:11:07.917727  916191 oci.go:144] the created container "multinode-951087-m02" has a running status.
	I0731 12:11:07.917753  916191 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/multinode-951087-m02/id_rsa...
	I0731 12:11:08.890657  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/multinode-951087-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0731 12:11:08.890708  916191 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16968-847174/.minikube/machines/multinode-951087-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0731 12:11:08.925845  916191 cli_runner.go:164] Run: docker container inspect multinode-951087-m02 --format={{.State.Status}}
	I0731 12:11:08.952608  916191 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0731 12:11:08.952628  916191 kic_runner.go:114] Args: [docker exec --privileged multinode-951087-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
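Each node gets a fresh RSA keypair; the public half (the 381 bytes above) is installed as /home/docker/.ssh/authorized_keys and chowned to the docker user so later provisioning steps can log in. Generating and marshaling such a key in Go (sketch using golang.org/x/crypto/ssh; the output filename is illustrative):

    // Generate an RSA key and the matching authorized_keys line.
    key, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }
    pub, err := ssh.NewPublicKey(&key.PublicKey)
    if err != nil {
        panic(err)
    }
    line := ssh.MarshalAuthorizedKey(pub) // "ssh-rsa AAAA...\n"
    _ = os.WriteFile("id_rsa.pub", line, 0o644)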
	I0731 12:11:09.052997  916191 cli_runner.go:164] Run: docker container inspect multinode-951087-m02 --format={{.State.Status}}
	I0731 12:11:09.088458  916191 machine.go:88] provisioning docker machine ...
	I0731 12:11:09.088485  916191 ubuntu.go:169] provisioning hostname "multinode-951087-m02"
	I0731 12:11:09.088550  916191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951087-m02
	I0731 12:11:09.115506  916191 main.go:141] libmachine: Using SSH client type: native
	I0731 12:11:09.115950  916191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 35921 <nil> <nil>}
	I0731 12:11:09.115962  916191 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-951087-m02 && echo "multinode-951087-m02" | sudo tee /etc/hostname
	I0731 12:11:09.268728  916191 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-951087-m02
	
	I0731 12:11:09.268806  916191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951087-m02
	I0731 12:11:09.294070  916191 main.go:141] libmachine: Using SSH client type: native
	I0731 12:11:09.294506  916191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 35921 <nil> <nil>}
	I0731 12:11:09.294531  916191 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-951087-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-951087-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-951087-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 12:11:09.429492  916191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
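Every provisioning step from here on is a single SSH command against the container's forwarded SSH port (127.0.0.1:35921 above). Running one such command with golang.org/x/crypto/ssh (sketch; pemBytes would hold the private key generated earlier, and ignoring host keys is only reasonable for a throwaway local container):

    signer, err := ssh.ParsePrivateKey(pemBytes)
    if err != nil {
        panic(err)
    }
    client, err := ssh.Dial("tcp", "127.0.0.1:35921", &ssh.ClientConfig{
        User:            "docker",
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a local test container
    })
    if err != nil {
        panic(err)
    }
    defer client.Close()
    sess, err := client.NewSession()
    if err != nil {
        panic(err)
    }
    defer sess.Close()
    out, err := sess.CombinedOutput(`sudo hostname multinode-951087-m02`)
    fmt.Printf("%s err=%v\n", out, err)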
	I0731 12:11:09.429519  916191 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16968-847174/.minikube CaCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16968-847174/.minikube}
	I0731 12:11:09.429539  916191 ubuntu.go:177] setting up certificates
	I0731 12:11:09.429547  916191 provision.go:83] configureAuth start
	I0731 12:11:09.429618  916191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-951087-m02
	I0731 12:11:09.452816  916191 provision.go:138] copyHostCerts
	I0731 12:11:09.452856  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem
	I0731 12:11:09.452888  916191 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem, removing ...
	I0731 12:11:09.452896  916191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem
	I0731 12:11:09.452974  916191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem (1082 bytes)
	I0731 12:11:09.453054  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem
	I0731 12:11:09.453078  916191 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem, removing ...
	I0731 12:11:09.453082  916191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem
	I0731 12:11:09.453108  916191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem (1123 bytes)
	I0731 12:11:09.453146  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem
	I0731 12:11:09.453169  916191 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem, removing ...
	I0731 12:11:09.453173  916191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem
	I0731 12:11:09.453198  916191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem (1679 bytes)
	I0731 12:11:09.453247  916191 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca-key.pem org=jenkins.multinode-951087-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-951087-m02]
	I0731 12:11:09.587357  916191 provision.go:172] copyRemoteCerts
	I0731 12:11:09.587426  916191 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 12:11:09.587470  916191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951087-m02
	I0731 12:11:09.606294  916191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35921 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/multinode-951087-m02/id_rsa Username:docker}
	I0731 12:11:09.703938  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 12:11:09.704031  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 12:11:09.737561  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 12:11:09.737625  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 12:11:09.768682  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 12:11:09.768791  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0731 12:11:09.800005  916191 provision.go:86] duration metric: configureAuth took 370.443081ms
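configureAuth signs a per-machine server certificate against the shared minikube CA; the SAN list logged above (192.168.58.3, 127.0.0.1, localhost, minikube, and the hostname) is what lets the node's TLS endpoint be addressed by any of those names. Issuing a certificate with those SANs via crypto/x509 (sketch; caCert, caKey, and key are assumed to be the parsed CA pair and the server's RSA key, and the validity window is illustrative):

    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(time.Now().UnixNano()),
        Subject:      pkix.Name{Organization: []string{"jenkins.multinode-951087-m02"}},
        DNSNames:     []string{"localhost", "minikube", "multinode-951087-m02"},
        IPAddresses:  []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    }
    der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    if err != nil {
        panic(err)
    }
    pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})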
	I0731 12:11:09.800029  916191 ubuntu.go:193] setting minikube options for container-runtime
	I0731 12:11:09.800246  916191 config.go:182] Loaded profile config "multinode-951087": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 12:11:09.800355  916191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951087-m02
	I0731 12:11:09.819256  916191 main.go:141] libmachine: Using SSH client type: native
	I0731 12:11:09.819705  916191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 35921 <nil> <nil>}
	I0731 12:11:09.819730  916191 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 12:11:10.099552  916191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 12:11:10.099579  916191 machine.go:91] provisioned docker machine in 1.011103055s
	I0731 12:11:10.099590  916191 client.go:171] LocalClient.Create took 7.63335393s
	I0731 12:11:10.099602  916191 start.go:167] duration metric: libmachine.API.Create for "multinode-951087" took 7.633399033s
	I0731 12:11:10.099611  916191 start.go:300] post-start starting for "multinode-951087-m02" (driver="docker")
	I0731 12:11:10.099627  916191 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 12:11:10.099698  916191 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 12:11:10.099758  916191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951087-m02
	I0731 12:11:10.122017  916191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35921 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/multinode-951087-m02/id_rsa Username:docker}
	I0731 12:11:10.220071  916191 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 12:11:10.224381  916191 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0731 12:11:10.224446  916191 command_runner.go:130] > NAME="Ubuntu"
	I0731 12:11:10.224467  916191 command_runner.go:130] > VERSION_ID="22.04"
	I0731 12:11:10.224497  916191 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0731 12:11:10.224521  916191 command_runner.go:130] > VERSION_CODENAME=jammy
	I0731 12:11:10.224532  916191 command_runner.go:130] > ID=ubuntu
	I0731 12:11:10.224538  916191 command_runner.go:130] > ID_LIKE=debian
	I0731 12:11:10.224546  916191 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0731 12:11:10.224556  916191 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0731 12:11:10.224564  916191 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0731 12:11:10.224573  916191 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0731 12:11:10.224607  916191 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0731 12:11:10.224668  916191 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 12:11:10.224707  916191 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 12:11:10.224723  916191 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 12:11:10.224730  916191 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0731 12:11:10.224743  916191 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-847174/.minikube/addons for local assets ...
	I0731 12:11:10.224808  916191 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-847174/.minikube/files for local assets ...
	I0731 12:11:10.224889  916191 filesync.go:149] local asset: /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem -> 8525502.pem in /etc/ssl/certs
	I0731 12:11:10.224899  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem -> /etc/ssl/certs/8525502.pem
	I0731 12:11:10.225004  916191 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 12:11:10.235949  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem --> /etc/ssl/certs/8525502.pem (1708 bytes)
	I0731 12:11:10.266946  916191 start.go:303] post-start completed in 167.31644ms
	I0731 12:11:10.267423  916191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-951087-m02
	I0731 12:11:10.285987  916191 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/config.json ...
	I0731 12:11:10.286276  916191 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 12:11:10.286319  916191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951087-m02
	I0731 12:11:10.304856  916191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35921 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/multinode-951087-m02/id_rsa Username:docker}
	I0731 12:11:10.398278  916191 command_runner.go:130] > 16%
	I0731 12:11:10.398365  916191 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 12:11:10.404826  916191 command_runner.go:130] > 164G
	I0731 12:11:10.404856  916191 start.go:128] duration metric: createHost completed in 7.941375725s
	I0731 12:11:10.404866  916191 start.go:83] releasing machines lock for "multinode-951087-m02", held for 7.941504192s
	I0731 12:11:10.404939  916191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-951087-m02
	I0731 12:11:10.425560  916191 out.go:177] * Found network options:
	I0731 12:11:10.427485  916191 out.go:177]   - NO_PROXY=192.168.58.2
	W0731 12:11:10.429300  916191 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 12:11:10.429353  916191 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 12:11:10.429424  916191 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 12:11:10.429471  916191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951087-m02
	I0731 12:11:10.429492  916191 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 12:11:10.429576  916191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951087-m02
	I0731 12:11:10.452447  916191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35921 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/multinode-951087-m02/id_rsa Username:docker}
	I0731 12:11:10.453558  916191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35921 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/multinode-951087-m02/id_rsa Username:docker}
	I0731 12:11:10.702044  916191 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0731 12:11:10.743111  916191 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 12:11:10.748716  916191 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0731 12:11:10.748738  916191 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0731 12:11:10.748756  916191 command_runner.go:130] > Device: b3h/179d	Inode: 5967833     Links: 1
	I0731 12:11:10.748764  916191 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 12:11:10.748773  916191 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0731 12:11:10.748779  916191 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0731 12:11:10.748785  916191 command_runner.go:130] > Change: 2023-07-31 11:47:51.445665001 +0000
	I0731 12:11:10.748791  916191 command_runner.go:130] >  Birth: 2023-07-31 11:47:51.445665001 +0000
	I0731 12:11:10.749372  916191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 12:11:10.774914  916191 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0731 12:11:10.774992  916191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 12:11:10.812258  916191 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0731 12:11:10.812341  916191 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
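Because this cluster uses kindnet, minikube sidelines every competing default CNI config (the loopback conf plus the podman and crio bridge conflists) by renaming it to *.mk_disabled; CNI consumers read /etc/cni/net.d in sorted order, so leaving the extra configs in place could select the wrong network. A Go sketch of the same rename pass (illustrative; assumes the imports os, log, strings, and path/filepath):

    for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
        matches, _ := filepath.Glob(pat)
        for _, m := range matches {
            if strings.HasSuffix(m, ".mk_disabled") {
                continue // already sidelined
            }
            if err := os.Rename(m, m+".mk_disabled"); err != nil {
                log.Printf("rename %s: %v", m, err)
            }
        }
    }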
	I0731 12:11:10.812356  916191 start.go:466] detecting cgroup driver to use...
	I0731 12:11:10.812407  916191 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0731 12:11:10.812469  916191 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 12:11:10.832500  916191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:11:10.846438  916191 docker.go:196] disabling cri-docker service (if available) ...
	I0731 12:11:10.846527  916191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 12:11:10.863238  916191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 12:11:10.881152  916191 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 12:11:10.990144  916191 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 12:11:11.103817  916191 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0731 12:11:11.103899  916191 docker.go:212] disabling docker service ...
	I0731 12:11:11.103989  916191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 12:11:11.128572  916191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 12:11:11.144289  916191 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 12:11:11.242743  916191 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0731 12:11:11.242823  916191 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 12:11:11.353501  916191 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0731 12:11:11.353582  916191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 12:11:11.368286  916191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:11:11.387461  916191 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0731 12:11:11.389117  916191 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 12:11:11.389181  916191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 12:11:11.403282  916191 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 12:11:11.403355  916191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 12:11:11.416235  916191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 12:11:11.431048  916191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 12:11:11.444412  916191 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 12:11:11.456829  916191 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 12:11:11.466234  916191 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0731 12:11:11.467208  916191 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 12:11:11.477843  916191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:11:11.582089  916191 ssh_runner.go:195] Run: sudo systemctl restart crio
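After the three sed edits above, the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf pins the pause image and the cgroup driver before the restart. Reconstructed from the commands (not captured from the node, and the section headers are assumed from stock CRI-O layout), the touched keys should read roughly:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"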
	I0731 12:11:11.731218  916191 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 12:11:11.731292  916191 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 12:11:11.737527  916191 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0731 12:11:11.737550  916191 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0731 12:11:11.737558  916191 command_runner.go:130] > Device: bdh/189d	Inode: 186         Links: 1
	I0731 12:11:11.737566  916191 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 12:11:11.737572  916191 command_runner.go:130] > Access: 2023-07-31 12:11:11.704307401 +0000
	I0731 12:11:11.737579  916191 command_runner.go:130] > Modify: 2023-07-31 12:11:11.704307401 +0000
	I0731 12:11:11.737586  916191 command_runner.go:130] > Change: 2023-07-31 12:11:11.704307401 +0000
	I0731 12:11:11.737590  916191 command_runner.go:130] >  Birth: -
	I0731 12:11:11.737732  916191 start.go:534] Will wait 60s for crictl version
	I0731 12:11:11.737819  916191 ssh_runner.go:195] Run: which crictl
	I0731 12:11:11.742499  916191 command_runner.go:130] > /usr/bin/crictl
	I0731 12:11:11.742859  916191 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 12:11:11.789206  916191 command_runner.go:130] > Version:  0.1.0
	I0731 12:11:11.789226  916191 command_runner.go:130] > RuntimeName:  cri-o
	I0731 12:11:11.789231  916191 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0731 12:11:11.789238  916191 command_runner.go:130] > RuntimeApiVersion:  v1
	I0731 12:11:11.792198  916191 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0731 12:11:11.792290  916191 ssh_runner.go:195] Run: crio --version
	I0731 12:11:11.835695  916191 command_runner.go:130] > crio version 1.24.6
	I0731 12:11:11.835763  916191 command_runner.go:130] > Version:          1.24.6
	I0731 12:11:11.835785  916191 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0731 12:11:11.835812  916191 command_runner.go:130] > GitTreeState:     clean
	I0731 12:11:11.835847  916191 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0731 12:11:11.835872  916191 command_runner.go:130] > GoVersion:        go1.18.2
	I0731 12:11:11.835891  916191 command_runner.go:130] > Compiler:         gc
	I0731 12:11:11.835928  916191 command_runner.go:130] > Platform:         linux/arm64
	I0731 12:11:11.835952  916191 command_runner.go:130] > Linkmode:         dynamic
	I0731 12:11:11.835974  916191 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0731 12:11:11.836009  916191 command_runner.go:130] > SeccompEnabled:   true
	I0731 12:11:11.836033  916191 command_runner.go:130] > AppArmorEnabled:  false
	I0731 12:11:11.837426  916191 ssh_runner.go:195] Run: crio --version
	I0731 12:11:11.879868  916191 command_runner.go:130] > crio version 1.24.6
	I0731 12:11:11.879940  916191 command_runner.go:130] > Version:          1.24.6
	I0731 12:11:11.879964  916191 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0731 12:11:11.879984  916191 command_runner.go:130] > GitTreeState:     clean
	I0731 12:11:11.880020  916191 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0731 12:11:11.880045  916191 command_runner.go:130] > GoVersion:        go1.18.2
	I0731 12:11:11.880066  916191 command_runner.go:130] > Compiler:         gc
	I0731 12:11:11.880098  916191 command_runner.go:130] > Platform:         linux/arm64
	I0731 12:11:11.880135  916191 command_runner.go:130] > Linkmode:         dynamic
	I0731 12:11:11.880174  916191 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0731 12:11:11.880194  916191 command_runner.go:130] > SeccompEnabled:   true
	I0731 12:11:11.880214  916191 command_runner.go:130] > AppArmorEnabled:  false
	I0731 12:11:11.884280  916191 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0731 12:11:11.886066  916191 out.go:177]   - env NO_PROXY=192.168.58.2
	I0731 12:11:11.887670  916191 cli_runner.go:164] Run: docker network inspect multinode-951087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 12:11:11.908192  916191 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0731 12:11:11.913867  916191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 12:11:11.928067  916191 certs.go:56] Setting up /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087 for IP: 192.168.58.3
	I0731 12:11:11.928131  916191 certs.go:190] acquiring lock for shared ca certs: {Name:mk762e840a818dea6b5e9edfaa8822eb28411d63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:11:11.928271  916191 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16968-847174/.minikube/ca.key
	I0731 12:11:11.928324  916191 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16968-847174/.minikube/proxy-client-ca.key
	I0731 12:11:11.928339  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 12:11:11.928353  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 12:11:11.928364  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 12:11:11.928375  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 12:11:11.928430  916191 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/852550.pem (1338 bytes)
	W0731 12:11:11.928476  916191 certs.go:433] ignoring /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/852550_empty.pem, impossibly tiny 0 bytes
	I0731 12:11:11.928490  916191 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 12:11:11.928522  916191 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem (1082 bytes)
	I0731 12:11:11.928550  916191 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem (1123 bytes)
	I0731 12:11:11.928577  916191 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem (1679 bytes)
	I0731 12:11:11.928630  916191 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem (1708 bytes)
	I0731 12:11:11.928661  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem -> /usr/share/ca-certificates/8525502.pem
	I0731 12:11:11.928677  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:11:11.928690  916191 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/852550.pem -> /usr/share/ca-certificates/852550.pem
	I0731 12:11:11.929096  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 12:11:11.959012  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 12:11:11.990186  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 12:11:12.023205  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 12:11:12.055571  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem --> /usr/share/ca-certificates/8525502.pem (1708 bytes)
	I0731 12:11:12.088155  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 12:11:12.120190  916191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/certs/852550.pem --> /usr/share/ca-certificates/852550.pem (1338 bytes)
	I0731 12:11:12.152067  916191 ssh_runner.go:195] Run: openssl version
	I0731 12:11:12.158834  916191 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0731 12:11:12.159173  916191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 12:11:12.171028  916191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:11:12.175698  916191 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 31 11:48 /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:11:12.175781  916191 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 31 11:48 /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:11:12.175861  916191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:11:12.184470  916191 command_runner.go:130] > b5213941
	I0731 12:11:12.184555  916191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 12:11:12.196471  916191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/852550.pem && ln -fs /usr/share/ca-certificates/852550.pem /etc/ssl/certs/852550.pem"
	I0731 12:11:12.215618  916191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/852550.pem
	I0731 12:11:12.220470  916191 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 31 11:54 /usr/share/ca-certificates/852550.pem
	I0731 12:11:12.220502  916191 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 31 11:54 /usr/share/ca-certificates/852550.pem
	I0731 12:11:12.220552  916191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/852550.pem
	I0731 12:11:12.229022  916191 command_runner.go:130] > 51391683
	I0731 12:11:12.229470  916191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/852550.pem /etc/ssl/certs/51391683.0"
	I0731 12:11:12.241058  916191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8525502.pem && ln -fs /usr/share/ca-certificates/8525502.pem /etc/ssl/certs/8525502.pem"
	I0731 12:11:12.252716  916191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8525502.pem
	I0731 12:11:12.257501  916191 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 31 11:54 /usr/share/ca-certificates/8525502.pem
	I0731 12:11:12.257603  916191 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 31 11:54 /usr/share/ca-certificates/8525502.pem
	I0731 12:11:12.257679  916191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8525502.pem
	I0731 12:11:12.265990  916191 command_runner.go:130] > 3ec20f2e
	I0731 12:11:12.266427  916191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8525502.pem /etc/ssl/certs/3ec20f2e.0"
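The hash-and-symlink sequence above follows OpenSSL's c_rehash convention: certificate directories are looked up by subject-name hash, so each CA PEM needs a <hash>.0 symlink (b5213941.0 -> minikubeCA.pem, and so on) before verification can find it. The same step in Go (sketch; paths match the log, error handling simplified, imports os, os/exec, and strings assumed):

    out, err := exec.Command("openssl", "x509", "-hash", "-noout",
        "-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
    if err != nil {
        panic(err)
    }
    hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    link := "/etc/ssl/certs/" + hash + ".0"
    if _, err := os.Lstat(link); os.IsNotExist(err) {
        _ = os.Symlink("/etc/ssl/certs/minikubeCA.pem", link)
    }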
	I0731 12:11:12.278252  916191 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0731 12:11:12.282692  916191 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0731 12:11:12.282784  916191 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0731 12:11:12.282926  916191 ssh_runner.go:195] Run: crio config
	I0731 12:11:12.335515  916191 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0731 12:11:12.335582  916191 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0731 12:11:12.335604  916191 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0731 12:11:12.335624  916191 command_runner.go:130] > #
	I0731 12:11:12.335660  916191 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0731 12:11:12.335693  916191 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0731 12:11:12.335716  916191 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0731 12:11:12.335742  916191 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0731 12:11:12.335772  916191 command_runner.go:130] > # reload'.
	I0731 12:11:12.335801  916191 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0731 12:11:12.335824  916191 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0731 12:11:12.335848  916191 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0731 12:11:12.335881  916191 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0731 12:11:12.335907  916191 command_runner.go:130] > [crio]
	I0731 12:11:12.335929  916191 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0731 12:11:12.335951  916191 command_runner.go:130] > # containers images, in this directory.
	I0731 12:11:12.335984  916191 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0731 12:11:12.336006  916191 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0731 12:11:12.336025  916191 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0731 12:11:12.336048  916191 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0731 12:11:12.336082  916191 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0731 12:11:12.336103  916191 command_runner.go:130] > # storage_driver = "vfs"
	I0731 12:11:12.336156  916191 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0731 12:11:12.336194  916191 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0731 12:11:12.336229  916191 command_runner.go:130] > # storage_option = [
	I0731 12:11:12.336248  916191 command_runner.go:130] > # ]
	I0731 12:11:12.336270  916191 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0731 12:11:12.336300  916191 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0731 12:11:12.336328  916191 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0731 12:11:12.336353  916191 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0731 12:11:12.336376  916191 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0731 12:11:12.336410  916191 command_runner.go:130] > # always happen on a node reboot
	I0731 12:11:12.336435  916191 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0731 12:11:12.336465  916191 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0731 12:11:12.336486  916191 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0731 12:11:12.336522  916191 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0731 12:11:12.336548  916191 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0731 12:11:12.336588  916191 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0731 12:11:12.336622  916191 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0731 12:11:12.336660  916191 command_runner.go:130] > # internal_wipe = true
	I0731 12:11:12.336690  916191 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0731 12:11:12.336733  916191 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0731 12:11:12.336763  916191 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0731 12:11:12.336896  916191 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0731 12:11:12.336935  916191 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0731 12:11:12.336958  916191 command_runner.go:130] > [crio.api]
	I0731 12:11:12.336983  916191 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0731 12:11:12.337017  916191 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0731 12:11:12.337042  916191 command_runner.go:130] > # IP address on which the stream server will listen.
	I0731 12:11:12.337062  916191 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0731 12:11:12.337088  916191 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0731 12:11:12.337118  916191 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0731 12:11:12.337139  916191 command_runner.go:130] > # stream_port = "0"
	I0731 12:11:12.337158  916191 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0731 12:11:12.337176  916191 command_runner.go:130] > # stream_enable_tls = false
	I0731 12:11:12.337198  916191 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0731 12:11:12.337232  916191 command_runner.go:130] > # stream_idle_timeout = ""
	I0731 12:11:12.337259  916191 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0731 12:11:12.337279  916191 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0731 12:11:12.337304  916191 command_runner.go:130] > # minutes.
	I0731 12:11:12.337333  916191 command_runner.go:130] > # stream_tls_cert = ""
	I0731 12:11:12.337357  916191 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0731 12:11:12.337379  916191 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0731 12:11:12.337407  916191 command_runner.go:130] > # stream_tls_key = ""
	I0731 12:11:12.337436  916191 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0731 12:11:12.337475  916191 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0731 12:11:12.337497  916191 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0731 12:11:12.337514  916191 command_runner.go:130] > # stream_tls_ca = ""
	I0731 12:11:12.337546  916191 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0731 12:11:12.337585  916191 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0731 12:11:12.337608  916191 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0731 12:11:12.337632  916191 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0731 12:11:12.337691  916191 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0731 12:11:12.337721  916191 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0731 12:11:12.337741  916191 command_runner.go:130] > [crio.runtime]
	I0731 12:11:12.337769  916191 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0731 12:11:12.337797  916191 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0731 12:11:12.337826  916191 command_runner.go:130] > # "nofile=1024:2048"
	I0731 12:11:12.337848  916191 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0731 12:11:12.337867  916191 command_runner.go:130] > # default_ulimits = [
	I0731 12:11:12.337895  916191 command_runner.go:130] > # ]
	I0731 12:11:12.337919  916191 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0731 12:11:12.337938  916191 command_runner.go:130] > # no_pivot = false
	I0731 12:11:12.337968  916191 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0731 12:11:12.338003  916191 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0731 12:11:12.338028  916191 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0731 12:11:12.338058  916191 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0731 12:11:12.338079  916191 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0731 12:11:12.338112  916191 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 12:11:12.338148  916191 command_runner.go:130] > # conmon = ""
	I0731 12:11:12.338170  916191 command_runner.go:130] > # Cgroup setting for conmon
	I0731 12:11:12.338191  916191 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0731 12:11:12.338225  916191 command_runner.go:130] > conmon_cgroup = "pod"
	I0731 12:11:12.338252  916191 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0731 12:11:12.338278  916191 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0731 12:11:12.338306  916191 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 12:11:12.338336  916191 command_runner.go:130] > # conmon_env = [
	I0731 12:11:12.338357  916191 command_runner.go:130] > # ]
	I0731 12:11:12.338379  916191 command_runner.go:130] > # Additional environment variables to set for all the
	I0731 12:11:12.338430  916191 command_runner.go:130] > # containers. These are overridden if set in the
	I0731 12:11:12.338457  916191 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0731 12:11:12.338495  916191 command_runner.go:130] > # default_env = [
	I0731 12:11:12.338534  916191 command_runner.go:130] > # ]
	I0731 12:11:12.338557  916191 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0731 12:11:12.338575  916191 command_runner.go:130] > # selinux = false
	I0731 12:11:12.338609  916191 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0731 12:11:12.338647  916191 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0731 12:11:12.338675  916191 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0731 12:11:12.338695  916191 command_runner.go:130] > # seccomp_profile = ""
	I0731 12:11:12.338731  916191 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0731 12:11:12.338754  916191 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0731 12:11:12.338777  916191 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0731 12:11:12.338812  916191 command_runner.go:130] > # which might increase security.
	I0731 12:11:12.338838  916191 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0731 12:11:12.338860  916191 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0731 12:11:12.338892  916191 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0731 12:11:12.338917  916191 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0731 12:11:12.338940  916191 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0731 12:11:12.338978  916191 command_runner.go:130] > # This option supports live configuration reload.
	I0731 12:11:12.338997  916191 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0731 12:11:12.339018  916191 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0731 12:11:12.339054  916191 command_runner.go:130] > # the cgroup blockio controller.
	I0731 12:11:12.339131  916191 command_runner.go:130] > # blockio_config_file = ""
	I0731 12:11:12.339163  916191 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0731 12:11:12.339182  916191 command_runner.go:130] > # irqbalance daemon.
	I0731 12:11:12.339317  916191 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0731 12:11:12.339376  916191 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0731 12:11:12.339406  916191 command_runner.go:130] > # This option supports live configuration reload.
	I0731 12:11:12.339444  916191 command_runner.go:130] > # rdt_config_file = ""
	I0731 12:11:12.339464  916191 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0731 12:11:12.339497  916191 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0731 12:11:12.339545  916191 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0731 12:11:12.339570  916191 command_runner.go:130] > # separate_pull_cgroup = ""
	I0731 12:11:12.339591  916191 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0731 12:11:12.339629  916191 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0731 12:11:12.339647  916191 command_runner.go:130] > # will be added.
	I0731 12:11:12.339666  916191 command_runner.go:130] > # default_capabilities = [
	I0731 12:11:12.339695  916191 command_runner.go:130] > # 	"CHOWN",
	I0731 12:11:12.339720  916191 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0731 12:11:12.339740  916191 command_runner.go:130] > # 	"FSETID",
	I0731 12:11:12.339760  916191 command_runner.go:130] > # 	"FOWNER",
	I0731 12:11:12.339788  916191 command_runner.go:130] > # 	"SETGID",
	I0731 12:11:12.339814  916191 command_runner.go:130] > # 	"SETUID",
	I0731 12:11:12.339833  916191 command_runner.go:130] > # 	"SETPCAP",
	I0731 12:11:12.339852  916191 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0731 12:11:12.339881  916191 command_runner.go:130] > # 	"KILL",
	I0731 12:11:12.339917  916191 command_runner.go:130] > # ]
	I0731 12:11:12.339956  916191 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0731 12:11:12.339989  916191 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0731 12:11:12.340019  916191 command_runner.go:130] > # add_inheritable_capabilities = true
	I0731 12:11:12.340040  916191 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0731 12:11:12.340062  916191 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 12:11:12.340091  916191 command_runner.go:130] > # default_sysctls = [
	I0731 12:11:12.340133  916191 command_runner.go:130] > # ]
	I0731 12:11:12.340161  916191 command_runner.go:130] > # List of devices on the host that a
	I0731 12:11:12.340183  916191 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0731 12:11:12.340203  916191 command_runner.go:130] > # allowed_devices = [
	I0731 12:11:12.340232  916191 command_runner.go:130] > # 	"/dev/fuse",
	I0731 12:11:12.340260  916191 command_runner.go:130] > # ]
	I0731 12:11:12.340283  916191 command_runner.go:130] > # List of additional devices, specified as
	I0731 12:11:12.340341  916191 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0731 12:11:12.340372  916191 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0731 12:11:12.340396  916191 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 12:11:12.340415  916191 command_runner.go:130] > # additional_devices = [
	I0731 12:11:12.340444  916191 command_runner.go:130] > # ]
	I0731 12:11:12.340471  916191 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0731 12:11:12.340492  916191 command_runner.go:130] > # cdi_spec_dirs = [
	I0731 12:11:12.340533  916191 command_runner.go:130] > # 	"/etc/cdi",
	I0731 12:11:12.340683  916191 command_runner.go:130] > # 	"/var/run/cdi",
	I0731 12:11:12.340702  916191 command_runner.go:130] > # ]
	I0731 12:11:12.340725  916191 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0731 12:11:12.340761  916191 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0731 12:11:12.340790  916191 command_runner.go:130] > # Defaults to false.
	I0731 12:11:12.340810  916191 command_runner.go:130] > # device_ownership_from_security_context = false
	I0731 12:11:12.340849  916191 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0731 12:11:12.340880  916191 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0731 12:11:12.340919  916191 command_runner.go:130] > # hooks_dir = [
	I0731 12:11:12.340947  916191 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0731 12:11:12.340964  916191 command_runner.go:130] > # ]
	I0731 12:11:12.340985  916191 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0731 12:11:12.341018  916191 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0731 12:11:12.341046  916191 command_runner.go:130] > # its default mounts from the following two files:
	I0731 12:11:12.341064  916191 command_runner.go:130] > #
	I0731 12:11:12.341087  916191 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0731 12:11:12.341123  916191 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0731 12:11:12.341163  916191 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0731 12:11:12.341181  916191 command_runner.go:130] > #
	I0731 12:11:12.341203  916191 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0731 12:11:12.341236  916191 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0731 12:11:12.341259  916191 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0731 12:11:12.341280  916191 command_runner.go:130] > #      only add mounts it finds in this file.
	I0731 12:11:12.341309  916191 command_runner.go:130] > #
	I0731 12:11:12.341331  916191 command_runner.go:130] > # default_mounts_file = ""
	I0731 12:11:12.341354  916191 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0731 12:11:12.341386  916191 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0731 12:11:12.341407  916191 command_runner.go:130] > # pids_limit = 0
	I0731 12:11:12.341431  916191 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0731 12:11:12.341463  916191 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0731 12:11:12.341495  916191 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0731 12:11:12.341518  916191 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0731 12:11:12.341554  916191 command_runner.go:130] > # log_size_max = -1
	I0731 12:11:12.341575  916191 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0731 12:11:12.341596  916191 command_runner.go:130] > # log_to_journald = false
	I0731 12:11:12.341634  916191 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0731 12:11:12.341662  916191 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0731 12:11:12.341681  916191 command_runner.go:130] > # Path to directory for container attach sockets.
	I0731 12:11:12.341701  916191 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0731 12:11:12.341732  916191 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0731 12:11:12.341758  916191 command_runner.go:130] > # bind_mount_prefix = ""
	I0731 12:11:12.341780  916191 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0731 12:11:12.341799  916191 command_runner.go:130] > # read_only = false
	I0731 12:11:12.341883  916191 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0731 12:11:12.341915  916191 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0731 12:11:12.341933  916191 command_runner.go:130] > # live configuration reload.
	I0731 12:11:12.341953  916191 command_runner.go:130] > # log_level = "info"
	I0731 12:11:12.341984  916191 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0731 12:11:12.342010  916191 command_runner.go:130] > # This option supports live configuration reload.
	I0731 12:11:12.342028  916191 command_runner.go:130] > # log_filter = ""
	I0731 12:11:12.342090  916191 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0731 12:11:12.342111  916191 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0731 12:11:12.342131  916191 command_runner.go:130] > # separated by comma.
	I0731 12:11:12.342166  916191 command_runner.go:130] > # uid_mappings = ""
	I0731 12:11:12.342195  916191 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0731 12:11:12.342216  916191 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0731 12:11:12.342237  916191 command_runner.go:130] > # separated by comma.
	I0731 12:11:12.342266  916191 command_runner.go:130] > # gid_mappings = ""
	I0731 12:11:12.342298  916191 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0731 12:11:12.342321  916191 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 12:11:12.342343  916191 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 12:11:12.342379  916191 command_runner.go:130] > # minimum_mappable_uid = -1
	I0731 12:11:12.342404  916191 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0731 12:11:12.342427  916191 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 12:11:12.342460  916191 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 12:11:12.342481  916191 command_runner.go:130] > # minimum_mappable_gid = -1
	I0731 12:11:12.342505  916191 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0731 12:11:12.342539  916191 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0731 12:11:12.342563  916191 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0731 12:11:12.342583  916191 command_runner.go:130] > # ctr_stop_timeout = 30
	I0731 12:11:12.342622  916191 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0731 12:11:12.342651  916191 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0731 12:11:12.342672  916191 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0731 12:11:12.342707  916191 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0731 12:11:12.342726  916191 command_runner.go:130] > # drop_infra_ctr = true
	I0731 12:11:12.342748  916191 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0731 12:11:12.342782  916191 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0731 12:11:12.342813  916191 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0731 12:11:12.342833  916191 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0731 12:11:12.342855  916191 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0731 12:11:12.342884  916191 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0731 12:11:12.342906  916191 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0731 12:11:12.342930  916191 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0731 12:11:12.342960  916191 command_runner.go:130] > # pinns_path = ""
	I0731 12:11:12.342969  916191 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0731 12:11:12.342977  916191 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0731 12:11:12.342985  916191 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0731 12:11:12.342990  916191 command_runner.go:130] > # default_runtime = "runc"
	I0731 12:11:12.342997  916191 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0731 12:11:12.343010  916191 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0731 12:11:12.343025  916191 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0731 12:11:12.343046  916191 command_runner.go:130] > # creation as a file is not desired either.
	I0731 12:11:12.343056  916191 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0731 12:11:12.343065  916191 command_runner.go:130] > # the hostname is being managed dynamically.
	I0731 12:11:12.343071  916191 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0731 12:11:12.343075  916191 command_runner.go:130] > # ]
	I0731 12:11:12.343083  916191 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0731 12:11:12.343094  916191 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0731 12:11:12.343103  916191 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0731 12:11:12.343114  916191 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0731 12:11:12.343118  916191 command_runner.go:130] > #
	I0731 12:11:12.343127  916191 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0731 12:11:12.343133  916191 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0731 12:11:12.343140  916191 command_runner.go:130] > #  runtime_type = "oci"
	I0731 12:11:12.343146  916191 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0731 12:11:12.343152  916191 command_runner.go:130] > #  privileged_without_host_devices = false
	I0731 12:11:12.343157  916191 command_runner.go:130] > #  allowed_annotations = []
	I0731 12:11:12.343169  916191 command_runner.go:130] > # Where:
	I0731 12:11:12.343180  916191 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0731 12:11:12.343192  916191 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0731 12:11:12.343201  916191 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0731 12:11:12.343211  916191 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0731 12:11:12.343216  916191 command_runner.go:130] > #   in $PATH.
	I0731 12:11:12.343226  916191 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0731 12:11:12.343232  916191 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0731 12:11:12.343240  916191 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0731 12:11:12.343244  916191 command_runner.go:130] > #   state.
	I0731 12:11:12.343260  916191 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0731 12:11:12.343267  916191 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0731 12:11:12.343277  916191 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0731 12:11:12.343284  916191 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0731 12:11:12.343295  916191 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0731 12:11:12.343304  916191 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0731 12:11:12.343313  916191 command_runner.go:130] > #   The currently recognized values are:
	I0731 12:11:12.343320  916191 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0731 12:11:12.343331  916191 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0731 12:11:12.343342  916191 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0731 12:11:12.343349  916191 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0731 12:11:12.343361  916191 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0731 12:11:12.343370  916191 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0731 12:11:12.343380  916191 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0731 12:11:12.343389  916191 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0731 12:11:12.343398  916191 command_runner.go:130] > #   should be moved to the container's cgroup
	I0731 12:11:12.343403  916191 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0731 12:11:12.343410  916191 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0731 12:11:12.343415  916191 command_runner.go:130] > runtime_type = "oci"
	I0731 12:11:12.343423  916191 command_runner.go:130] > runtime_root = "/run/runc"
	I0731 12:11:12.343428  916191 command_runner.go:130] > runtime_config_path = ""
	I0731 12:11:12.343433  916191 command_runner.go:130] > monitor_path = ""
	I0731 12:11:12.343441  916191 command_runner.go:130] > monitor_cgroup = ""
	I0731 12:11:12.343446  916191 command_runner.go:130] > monitor_exec_cgroup = ""
	I0731 12:11:12.343505  916191 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0731 12:11:12.343515  916191 command_runner.go:130] > # running containers
	I0731 12:11:12.343522  916191 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0731 12:11:12.343531  916191 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0731 12:11:12.343543  916191 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0731 12:11:12.343550  916191 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0731 12:11:12.343560  916191 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0731 12:11:12.343566  916191 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0731 12:11:12.343574  916191 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0731 12:11:12.343579  916191 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0731 12:11:12.343589  916191 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0731 12:11:12.343597  916191 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0731 12:11:12.343605  916191 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0731 12:11:12.343611  916191 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0731 12:11:12.343619  916191 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0731 12:11:12.343631  916191 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0731 12:11:12.343644  916191 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0731 12:11:12.343654  916191 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0731 12:11:12.343665  916191 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0731 12:11:12.343678  916191 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0731 12:11:12.343688  916191 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0731 12:11:12.343697  916191 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0731 12:11:12.343705  916191 command_runner.go:130] > # Example:
	I0731 12:11:12.343711  916191 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0731 12:11:12.343721  916191 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0731 12:11:12.343727  916191 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0731 12:11:12.343735  916191 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0731 12:11:12.343740  916191 command_runner.go:130] > # cpuset = "0-1"
	I0731 12:11:12.343744  916191 command_runner.go:130] > # cpushares = 0
	I0731 12:11:12.343751  916191 command_runner.go:130] > # Where:
	I0731 12:11:12.343757  916191 command_runner.go:130] > # The workload name is workload-type.
	I0731 12:11:12.343766  916191 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0731 12:11:12.343776  916191 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0731 12:11:12.343783  916191 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0731 12:11:12.343795  916191 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0731 12:11:12.343807  916191 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0731 12:11:12.343814  916191 command_runner.go:130] > # 
	I0731 12:11:12.343822  916191 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0731 12:11:12.343831  916191 command_runner.go:130] > #
	I0731 12:11:12.343839  916191 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0731 12:11:12.343850  916191 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0731 12:11:12.343858  916191 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0731 12:11:12.343866  916191 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0731 12:11:12.343881  916191 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0731 12:11:12.343889  916191 command_runner.go:130] > [crio.image]
	I0731 12:11:12.343897  916191 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0731 12:11:12.343905  916191 command_runner.go:130] > # default_transport = "docker://"
	I0731 12:11:12.343912  916191 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0731 12:11:12.343923  916191 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0731 12:11:12.343928  916191 command_runner.go:130] > # global_auth_file = ""
	I0731 12:11:12.343941  916191 command_runner.go:130] > # The image used to instantiate infra containers.
	I0731 12:11:12.343948  916191 command_runner.go:130] > # This option supports live configuration reload.
	I0731 12:11:12.343954  916191 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0731 12:11:12.343964  916191 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0731 12:11:12.343971  916191 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0731 12:11:12.343981  916191 command_runner.go:130] > # This option supports live configuration reload.
	I0731 12:11:12.343987  916191 command_runner.go:130] > # pause_image_auth_file = ""
	I0731 12:11:12.343997  916191 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0731 12:11:12.344005  916191 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0731 12:11:12.344015  916191 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0731 12:11:12.344022  916191 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0731 12:11:12.344028  916191 command_runner.go:130] > # pause_command = "/pause"
	I0731 12:11:12.344038  916191 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0731 12:11:12.344045  916191 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0731 12:11:12.344056  916191 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0731 12:11:12.344065  916191 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0731 12:11:12.344075  916191 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0731 12:11:12.344080  916191 command_runner.go:130] > # signature_policy = ""
	I0731 12:11:12.344090  916191 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0731 12:11:12.344098  916191 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0731 12:11:12.344103  916191 command_runner.go:130] > # changing them here.
	I0731 12:11:12.344130  916191 command_runner.go:130] > # insecure_registries = [
	I0731 12:11:12.344135  916191 command_runner.go:130] > # ]
	I0731 12:11:12.344142  916191 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0731 12:11:12.344153  916191 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0731 12:11:12.344162  916191 command_runner.go:130] > # image_volumes = "mkdir"
	I0731 12:11:12.344168  916191 command_runner.go:130] > # Temporary directory to use for storing big files
	I0731 12:11:12.344173  916191 command_runner.go:130] > # big_files_temporary_dir = ""
	I0731 12:11:12.344181  916191 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0731 12:11:12.344186  916191 command_runner.go:130] > # CNI plugins.
	I0731 12:11:12.344193  916191 command_runner.go:130] > [crio.network]
	I0731 12:11:12.344201  916191 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0731 12:11:12.344210  916191 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0731 12:11:12.344215  916191 command_runner.go:130] > # cni_default_network = ""
	I0731 12:11:12.344224  916191 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0731 12:11:12.344230  916191 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0731 12:11:12.344239  916191 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0731 12:11:12.344244  916191 command_runner.go:130] > # plugin_dirs = [
	I0731 12:11:12.344251  916191 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0731 12:11:12.344256  916191 command_runner.go:130] > # ]
	I0731 12:11:12.344263  916191 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0731 12:11:12.344268  916191 command_runner.go:130] > [crio.metrics]
	I0731 12:11:12.344278  916191 command_runner.go:130] > # Globally enable or disable metrics support.
	I0731 12:11:12.344286  916191 command_runner.go:130] > # enable_metrics = false
	I0731 12:11:12.344292  916191 command_runner.go:130] > # Specify enabled metrics collectors.
	I0731 12:11:12.344300  916191 command_runner.go:130] > # Per default all metrics are enabled.
	I0731 12:11:12.344308  916191 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0731 12:11:12.344318  916191 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0731 12:11:12.344325  916191 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0731 12:11:12.344333  916191 command_runner.go:130] > # metrics_collectors = [
	I0731 12:11:12.344338  916191 command_runner.go:130] > # 	"operations",
	I0731 12:11:12.344343  916191 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0731 12:11:12.344349  916191 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0731 12:11:12.344354  916191 command_runner.go:130] > # 	"operations_errors",
	I0731 12:11:12.344362  916191 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0731 12:11:12.344367  916191 command_runner.go:130] > # 	"image_pulls_by_name",
	I0731 12:11:12.344372  916191 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0731 12:11:12.344379  916191 command_runner.go:130] > # 	"image_pulls_failures",
	I0731 12:11:12.344385  916191 command_runner.go:130] > # 	"image_pulls_successes",
	I0731 12:11:12.344393  916191 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0731 12:11:12.344401  916191 command_runner.go:130] > # 	"image_layer_reuse",
	I0731 12:11:12.344410  916191 command_runner.go:130] > # 	"containers_oom_total",
	I0731 12:11:12.344415  916191 command_runner.go:130] > # 	"containers_oom",
	I0731 12:11:12.344424  916191 command_runner.go:130] > # 	"processes_defunct",
	I0731 12:11:12.344430  916191 command_runner.go:130] > # 	"operations_total",
	I0731 12:11:12.344435  916191 command_runner.go:130] > # 	"operations_latency_seconds",
	I0731 12:11:12.344443  916191 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0731 12:11:12.344451  916191 command_runner.go:130] > # 	"operations_errors_total",
	I0731 12:11:12.344456  916191 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0731 12:11:12.344462  916191 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0731 12:11:12.344470  916191 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0731 12:11:12.344475  916191 command_runner.go:130] > # 	"image_pulls_success_total",
	I0731 12:11:12.344480  916191 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0731 12:11:12.344488  916191 command_runner.go:130] > # 	"containers_oom_count_total",
	I0731 12:11:12.344492  916191 command_runner.go:130] > # ]
	I0731 12:11:12.344499  916191 command_runner.go:130] > # The port on which the metrics server will listen.
	I0731 12:11:12.344507  916191 command_runner.go:130] > # metrics_port = 9090
	I0731 12:11:12.344513  916191 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0731 12:11:12.344520  916191 command_runner.go:130] > # metrics_socket = ""
	I0731 12:11:12.344526  916191 command_runner.go:130] > # The certificate for the secure metrics server.
	I0731 12:11:12.344537  916191 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0731 12:11:12.344545  916191 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0731 12:11:12.344554  916191 command_runner.go:130] > # certificate on any modification event.
	I0731 12:11:12.344559  916191 command_runner.go:130] > # metrics_cert = ""
	I0731 12:11:12.344568  916191 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0731 12:11:12.344574  916191 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0731 12:11:12.344581  916191 command_runner.go:130] > # metrics_key = ""
	I0731 12:11:12.344591  916191 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0731 12:11:12.344595  916191 command_runner.go:130] > [crio.tracing]
	I0731 12:11:12.344602  916191 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0731 12:11:12.344615  916191 command_runner.go:130] > # enable_tracing = false
	I0731 12:11:12.344622  916191 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0731 12:11:12.344628  916191 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0731 12:11:12.344639  916191 command_runner.go:130] > # Number of samples to collect per million spans.
	I0731 12:11:12.344645  916191 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0731 12:11:12.344658  916191 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0731 12:11:12.344665  916191 command_runner.go:130] > [crio.stats]
	I0731 12:11:12.344678  916191 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0731 12:11:12.344685  916191 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0731 12:11:12.344690  916191 command_runner.go:130] > # stats_collection_period = 0
	I0731 12:11:12.344721  916191 command_runner.go:130] ! time="2023-07-31 12:11:12.332659839Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0731 12:11:12.344737  916191 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
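In the dump above, lines tagged ">" carry the stdout of `crio config` (the effective TOML), while lines tagged "!" carry its stderr (CRI-O's own startup notices, such as the version banner). A short Go sketch of capturing the two streams separately; this is illustrative, not minikube's implementation:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        // Run "crio config", keeping stdout (the TOML) apart from stderr
        // (version and capability notices).
        cmd := exec.Command("crio", "config")
        var stdout, stderr bytes.Buffer
        cmd.Stdout = &stdout
        cmd.Stderr = &stderr
        if err := cmd.Run(); err != nil {
            fmt.Println("crio config failed:", err)
            return
        }
        fmt.Print(stdout.String()) // effective configuration
        fmt.Print(stderr.String()) // startup notices
    }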
	I0731 12:11:12.344802  916191 cni.go:84] Creating CNI manager for ""
	I0731 12:11:12.344814  916191 cni.go:136] 2 nodes found, recommending kindnet
	I0731 12:11:12.344823  916191 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0731 12:11:12.344842  916191 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-951087 NodeName:multinode-951087-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 12:11:12.344977  916191 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-951087-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
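The generated kubeadm config above is one multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small sketch that splits such a stream and reports each document's apiVersion and kind, assuming the config has been saved locally as kubeadm.yaml (an illustrative name) and using gopkg.in/yaml.v3:

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    // header captures just enough of each YAML document to identify it.
    type header struct {
        APIVersion string `yaml:"apiVersion"`
        Kind       string `yaml:"kind"`
    }

    func main() {
        f, err := os.Open("kubeadm.yaml") // illustrative file name
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var h header
            if err := dec.Decode(&h); errors.Is(err, io.EOF) {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s %s\n", h.APIVersion, h.Kind)
        }
    }

For the config shown here, that prints the four documents in order, ending with kubeproxy.config.k8s.io/v1alpha1 KubeProxyConfiguration.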
	
	I0731 12:11:12.345036  916191 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-951087-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-951087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0731 12:11:12.345107  916191 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0731 12:11:12.356059  916191 command_runner.go:130] > kubeadm
	I0731 12:11:12.356154  916191 command_runner.go:130] > kubectl
	I0731 12:11:12.356166  916191 command_runner.go:130] > kubelet
	I0731 12:11:12.356196  916191 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 12:11:12.356274  916191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0731 12:11:12.367323  916191 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0731 12:11:12.392445  916191 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 12:11:12.419193  916191 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0731 12:11:12.424071  916191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
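The bash one-liner above makes the /etc/hosts update idempotent: grep -v drops any existing line ending in a tab plus control-plane.minikube.internal, echo appends the fresh mapping, and the result is copied back over /etc/hosts through a temp file. A rough Go equivalent, offered as a sketch rather than the code minikube runs:

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const host = "control-plane.minikube.internal"
        const ip = "192.168.58.2"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        // Drop any existing "<something>\t<host>" line (the grep -v step).
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        // Append the fresh mapping (the echo step) and write back.
        kept = append(kept, ip+"\t"+host)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
    }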
	I0731 12:11:12.438223  916191 host.go:66] Checking if "multinode-951087" exists ...
	I0731 12:11:12.438495  916191 start.go:301] JoinCluster: &{Name:multinode-951087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-951087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 12:11:12.438599  916191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 12:11:12.438655  916191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951087
	I0731 12:11:12.439043  916191 config.go:182] Loaded profile config "multinode-951087": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 12:11:12.458214  916191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35916 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/multinode-951087/id_rsa Username:docker}
	I0731 12:11:12.626844  916191 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token nm8hb8.bij1dzhms6zo2sju --discovery-token-ca-cert-hash sha256:59797f47caa702c46c8e55349da2b7fcf9d45fa97f7025328f291444513c4181 
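The --discovery-token-ca-cert-hash in the printed join command follows kubeadm's scheme: "sha256:" plus the hex-encoded SHA-256 of the cluster CA certificate's Subject Public Key Info. A sketch that recomputes the hash from a PEM-encoded CA certificate (the path below is illustrative):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // illustrative path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded Subject Public Key Info of the CA.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }

A joining node uses this pinned hash to authenticate the control plane before trusting the bootstrap token exchange.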
	I0731 12:11:12.630612  916191 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0731 12:11:12.630649  916191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nm8hb8.bij1dzhms6zo2sju --discovery-token-ca-cert-hash sha256:59797f47caa702c46c8e55349da2b7fcf9d45fa97f7025328f291444513c4181 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-951087-m02"
	I0731 12:11:12.675436  916191 command_runner.go:130] > [preflight] Running pre-flight checks
	I0731 12:11:12.717776  916191 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0731 12:11:12.717801  916191 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1040-aws
	I0731 12:11:12.717808  916191 command_runner.go:130] > OS: Linux
	I0731 12:11:12.717814  916191 command_runner.go:130] > CGROUPS_CPU: enabled
	I0731 12:11:12.717822  916191 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0731 12:11:12.717829  916191 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0731 12:11:12.717835  916191 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0731 12:11:12.717844  916191 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0731 12:11:12.717850  916191 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0731 12:11:12.717859  916191 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0731 12:11:12.717867  916191 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0731 12:11:12.717874  916191 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0731 12:11:12.828687  916191 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0731 12:11:12.828711  916191 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0731 12:11:12.862899  916191 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 12:11:12.863214  916191 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 12:11:12.863457  916191 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0731 12:11:12.963538  916191 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0731 12:11:15.979255  916191 command_runner.go:130] > This node has joined the cluster:
	I0731 12:11:15.979325  916191 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0731 12:11:15.979348  916191 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0731 12:11:15.979380  916191 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0731 12:11:15.982854  916191 command_runner.go:130] ! W0731 12:11:12.674453    1017 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0731 12:11:15.982890  916191 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1040-aws\n", err: exit status 1
	I0731 12:11:15.982902  916191 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 12:11:15.982916  916191 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nm8hb8.bij1dzhms6zo2sju --discovery-token-ca-cert-hash sha256:59797f47caa702c46c8e55349da2b7fcf9d45fa97f7025328f291444513c4181 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-951087-m02": (3.352253501s)
	I0731 12:11:15.982936  916191 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 12:11:16.239392  916191 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0731 12:11:16.239421  916191 start.go:303] JoinCluster complete in 3.800924619s
	I0731 12:11:16.239432  916191 cni.go:84] Creating CNI manager for ""
	I0731 12:11:16.239439  916191 cni.go:136] 2 nodes found, recommending kindnet
	I0731 12:11:16.239491  916191 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 12:11:16.244073  916191 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0731 12:11:16.244097  916191 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I0731 12:11:16.244105  916191 command_runner.go:130] > Device: 3ah/58d	Inode: 5971530     Links: 1
	I0731 12:11:16.244196  916191 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 12:11:16.244203  916191 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I0731 12:11:16.244214  916191 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I0731 12:11:16.244220  916191 command_runner.go:130] > Change: 2023-07-31 11:47:52.097661811 +0000
	I0731 12:11:16.244229  916191 command_runner.go:130] >  Birth: 2023-07-31 11:47:52.053662026 +0000
	I0731 12:11:16.244269  916191 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0731 12:11:16.244281  916191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0731 12:11:16.266666  916191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0731 12:11:16.604655  916191 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0731 12:11:16.604677  916191 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0731 12:11:16.604685  916191 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0731 12:11:16.604691  916191 command_runner.go:130] > daemonset.apps/kindnet configured
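
With two nodes found, minikube selects kindnet as the CNI and applies its manifest through the version-matched kubectl; the "unchanged" lines above show most of the manifest was already in place from the first node. A sketch of the apply, plus an illustrative rollout check that is not part of this run:

	# Re-apply the kindnet manifest that minikube copied to /var/tmp/minikube/cni.yaml.
	sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply \
	  --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	# Illustrative follow-up: wait for kindnet to be running on every node.
	kubectl --context multinode-951087 -n kube-system rollout status daemonset/kindnet
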
	I0731 12:11:16.605054  916191 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16968-847174/kubeconfig
	I0731 12:11:16.605298  916191 kapi.go:59] client config for multinode-951087: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/client.crt", KeyFile:"/home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/client.key", CAFile:"/home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e64f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:11:16.605611  916191 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0731 12:11:16.605618  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:16.605627  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:16.605634  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:16.608457  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:16.608483  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:16.608492  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:16.608499  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:16.608506  916191 round_trippers.go:580]     Content-Length: 291
	I0731 12:11:16.608513  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:16 GMT
	I0731 12:11:16.608523  916191 round_trippers.go:580]     Audit-Id: e942417e-0a68-461d-899c-d168e1ee5ba0
	I0731 12:11:16.608530  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:16.608537  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:16.608567  916191 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"85b8ff0a-91d2-40c3-9d46-82ccfed95f91","resourceVersion":"417","creationTimestamp":"2023-07-31T12:10:40Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0731 12:11:16.608678  916191 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-951087" context rescaled to 1 replicas
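
The GET above reads the Scale subresource of the coredns deployment; spec.replicas was already 1, so no write was needed. A kubectl sketch of the same read-then-pin round trip:

	# Read the current replica count, then pin coredns to a single replica.
	kubectl --context multinode-951087 -n kube-system get deployment coredns \
	  -o jsonpath='{.spec.replicas}'
	kubectl --context multinode-951087 -n kube-system scale deployment coredns --replicas=1
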
	I0731 12:11:16.608707  916191 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0731 12:11:16.610816  916191 out.go:177] * Verifying Kubernetes components...
	I0731 12:11:16.612517  916191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 12:11:16.627686  916191 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16968-847174/kubeconfig
	I0731 12:11:16.627942  916191 kapi.go:59] client config for multinode-951087: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/client.crt", KeyFile:"/home/jenkins/minikube-integration/16968-847174/.minikube/profiles/multinode-951087/client.key", CAFile:"/home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e64f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:11:16.628272  916191 node_ready.go:35] waiting up to 6m0s for node "multinode-951087-m02" to be "Ready" ...
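
The GET requests that follow are this wait loop polling the node object roughly every 500ms until its Ready condition turns True. A shell equivalent, as a sketch:

	# Poll the worker's Ready condition every 500ms, as the loop below does.
	until [ "$(kubectl --context multinode-951087 get node multinode-951087-m02 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')" = "True" ]; do
	  sleep 0.5
	done
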
	I0731 12:11:16.628342  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:16.628352  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:16.628361  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:16.628369  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:16.631056  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:16.631077  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:16.631086  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:16 GMT
	I0731 12:11:16.631093  916191 round_trippers.go:580]     Audit-Id: 6c57fc35-9ab0-4287-97e4-85c88f9b7916
	I0731 12:11:16.631107  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:16.631119  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:16.631126  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:16.631137  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:16.631265  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"460","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0731 12:11:16.631652  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:16.631666  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:16.631675  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:16.631682  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:16.634082  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:16.634108  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:16.634125  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:16.634133  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:16.634140  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:16.634151  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:16.634158  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:16 GMT
	I0731 12:11:16.634170  916191 round_trippers.go:580]     Audit-Id: 0049634f-db27-4daf-b5bd-9b363184da6a
	I0731 12:11:16.634282  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"460","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0731 12:11:17.135399  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:17.135424  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:17.135433  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:17.135442  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:17.138232  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:17.138258  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:17.138269  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:17.138276  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:17.138283  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:17 GMT
	I0731 12:11:17.138290  916191 round_trippers.go:580]     Audit-Id: a5220003-d667-4b7c-be76-76fcab7d9c25
	I0731 12:11:17.138297  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:17.138307  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:17.138424  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"460","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0731 12:11:17.635491  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:17.635516  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:17.635526  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:17.635534  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:17.638137  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:17.638197  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:17.638213  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:17.638221  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:17.638231  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:17 GMT
	I0731 12:11:17.638238  916191 round_trippers.go:580]     Audit-Id: 4a3b5728-db7c-434a-a1ee-1cd1d2f62691
	I0731 12:11:17.638248  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:17.638255  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:17.638368  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"460","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0731 12:11:18.134928  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:18.134955  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:18.134966  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:18.134974  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:18.137872  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:18.137910  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:18.137920  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:18.137927  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:18.137935  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:18 GMT
	I0731 12:11:18.137942  916191 round_trippers.go:580]     Audit-Id: ea5ff700-f541-41bf-8aa8-dd5433b05248
	I0731 12:11:18.137960  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:18.137973  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:18.138285  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"460","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0731 12:11:18.634914  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:18.634940  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:18.634949  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:18.634957  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:18.637844  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:18.637881  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:18.637890  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:18.637897  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:18.637903  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:18.637910  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:18.637917  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:18 GMT
	I0731 12:11:18.637924  916191 round_trippers.go:580]     Audit-Id: 643bd875-9276-4440-b7ac-60f50ea72552
	I0731 12:11:18.638061  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"475","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0731 12:11:18.638424  916191 node_ready.go:58] node "multinode-951087-m02" has status "Ready":"False"
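
A freshly joined node stays NotReady until its CNI is configured, i.e. until the kindnet pod scheduled above starts on it; the Ready condition's message field carries the reason. An illustrative way to surface it (output wording is typical, not taken from this run):

	# Show why the node is still NotReady; until the CNI plugin comes up this is
	# usually "container runtime network not ready ... cni plugin not initialized".
	kubectl --context multinode-951087 get node multinode-951087-m02 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'
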
	I0731 12:11:19.134984  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:19.135009  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:19.135019  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:19.135027  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:19.137655  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:19.137715  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:19.137739  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:19.137762  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:19.137796  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:19.137823  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:19.137846  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:19 GMT
	I0731 12:11:19.137868  916191 round_trippers.go:580]     Audit-Id: d8c7cbaf-576d-46e3-8ed0-cb7179c7f18a
	I0731 12:11:19.138002  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"475","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0731 12:11:19.635530  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:19.635555  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:19.635564  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:19.635572  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:19.638179  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:19.638251  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:19.638274  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:19.638297  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:19.638332  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:19.638350  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:19 GMT
	I0731 12:11:19.638360  916191 round_trippers.go:580]     Audit-Id: 274e48f7-a0ff-4b67-b8f7-d38325e533cd
	I0731 12:11:19.638370  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:19.638479  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"475","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0731 12:11:20.134856  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:20.134879  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:20.134889  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:20.134896  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:20.137559  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:20.137586  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:20.137596  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:20 GMT
	I0731 12:11:20.137604  916191 round_trippers.go:580]     Audit-Id: 178a4fba-769b-4e7f-a0fd-0ec13e616a45
	I0731 12:11:20.137611  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:20.137617  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:20.137626  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:20.137635  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:20.137774  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"475","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0731 12:11:20.635240  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:20.635262  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:20.635272  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:20.635280  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:20.637886  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:20.637909  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:20.637917  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:20.637924  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:20.637931  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:20.637938  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:20.637944  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:20 GMT
	I0731 12:11:20.637951  916191 round_trippers.go:580]     Audit-Id: 85f76c8b-b9fc-4da8-98b3-cb9797ed9a7b
	I0731 12:11:20.638069  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"475","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0731 12:11:21.135783  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:21.135808  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:21.135817  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:21.135825  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:21.138451  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:21.138474  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:21.138483  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:21.138490  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:21.138497  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:21 GMT
	I0731 12:11:21.138504  916191 round_trippers.go:580]     Audit-Id: ce8229f9-14ab-4add-8223-a9167426c6f7
	I0731 12:11:21.138511  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:21.138520  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:21.138629  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"475","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0731 12:11:21.139009  916191 node_ready.go:58] node "multinode-951087-m02" has status "Ready":"False"
	I0731 12:11:21.635082  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:21.635124  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:21.635134  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:21.635142  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:21.637803  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:21.637825  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:21.637834  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:21.637840  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:21.637847  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:21 GMT
	I0731 12:11:21.637854  916191 round_trippers.go:580]     Audit-Id: 503524e5-691b-4fc5-a6b4-04a85cfad4fa
	I0731 12:11:21.637861  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:21.637874  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:21.637977  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"475","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0731 12:11:22.135619  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:22.135646  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:22.135657  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:22.135664  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:22.138245  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:22.138271  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:22.138281  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:22.138288  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:22.138295  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:22.138301  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:22.138308  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:22 GMT
	I0731 12:11:22.138315  916191 round_trippers.go:580]     Audit-Id: f5b4d1be-b6da-4780-8663-5d1e40c04f69
	I0731 12:11:22.138638  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"475","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0731 12:11:22.635280  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:22.635307  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:22.635317  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:22.635325  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:22.637947  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:22.637969  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:22.637978  916191 round_trippers.go:580]     Audit-Id: befbd00e-5394-4d95-80ec-5ab33e06aa3c
	I0731 12:11:22.637985  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:22.637993  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:22.637999  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:22.638006  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:22.638013  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:22 GMT
	I0731 12:11:22.638104  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"475","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0731 12:11:23.135756  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:23.135779  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:23.135789  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:23.135797  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:23.138425  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:23.138446  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:23.138456  916191 round_trippers.go:580]     Audit-Id: 496619c7-a4eb-4cd6-8c27-c8ffe6b30f00
	I0731 12:11:23.138464  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:23.138470  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:23.138477  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:23.138483  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:23.138490  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:23 GMT
	I0731 12:11:23.138591  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"475","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0731 12:11:23.635718  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:23.635743  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:23.635754  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:23.635761  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:23.638591  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:23.638618  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:23.638628  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:23 GMT
	I0731 12:11:23.638635  916191 round_trippers.go:580]     Audit-Id: ad710df0-2953-4520-a56c-5bcdd186ba2c
	I0731 12:11:23.638641  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:23.638648  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:23.638655  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:23.638661  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:23.638795  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"475","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0731 12:11:23.639183  916191 node_ready.go:58] node "multinode-951087-m02" has status "Ready":"False"
	I0731 12:11:24.134949  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:24.134974  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:24.134985  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:24.134993  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:24.137722  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:24.137752  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:24.137762  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:24.137770  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:24.137778  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:24.137786  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:24 GMT
	I0731 12:11:24.137796  916191 round_trippers.go:580]     Audit-Id: 11507d70-ec13-4d71-8cc4-3b819ee4addd
	I0731 12:11:24.137803  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:24.138066  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"475","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0731 12:11:24.634877  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:24.634900  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:24.634910  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:24.634918  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:24.637553  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:24.637580  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:24.637590  916191 round_trippers.go:580]     Audit-Id: fd29579a-9db9-4b75-aa52-4e90209716ab
	I0731 12:11:24.637597  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:24.637604  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:24.637611  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:24.637619  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:24.637629  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:24 GMT
	I0731 12:11:24.637745  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"475","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0731 12:11:25.134779  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:25.134801  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:25.134812  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:25.134819  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:25.137593  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:25.137619  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:25.137628  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:25.137637  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:25 GMT
	I0731 12:11:25.137643  916191 round_trippers.go:580]     Audit-Id: c56b7131-9947-445c-a150-d4118e4e13a4
	I0731 12:11:25.137651  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:25.137657  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:25.137667  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:25.137831  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"475","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0731 12:11:25.635540  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:25.635564  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:25.635574  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:25.635581  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:25.638028  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:25.638055  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:25.638065  916191 round_trippers.go:580]     Audit-Id: a186a063-ab12-419f-8dfc-9bc74fbe8b3b
	I0731 12:11:25.638074  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:25.638081  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:25.638088  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:25.638099  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:25.638106  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:25 GMT
	I0731 12:11:25.638384  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"475","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0731 12:11:26.134980  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:26.135006  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:26.135017  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:26.135039  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:26.137457  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:26.137479  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:26.137488  916191 round_trippers.go:580]     Audit-Id: 8194c7bd-cbee-4956-8086-6a9a8e4ebb79
	I0731 12:11:26.137495  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:26.137502  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:26.137510  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:26.137520  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:26.137527  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:26 GMT
	I0731 12:11:26.137668  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"482","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0731 12:11:26.138036  916191 node_ready.go:58] node "multinode-951087-m02" has status "Ready":"False"
	I0731 12:11:26.635808  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:26.635831  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:26.635841  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:26.635848  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:26.638443  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:26.638465  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:26.638474  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:26.638481  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:26.638488  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:26.638495  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:26 GMT
	I0731 12:11:26.638502  916191 round_trippers.go:580]     Audit-Id: 379bb2ac-41f1-444c-9764-e1b2436f3c38
	I0731 12:11:26.638509  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:26.638617  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"482","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0731 12:11:27.135417  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:27.135440  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:27.135450  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:27.135457  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:27.139610  916191 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 12:11:27.139637  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:27.139647  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:27.139655  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:27 GMT
	I0731 12:11:27.139662  916191 round_trippers.go:580]     Audit-Id: 85da9c1d-f2b7-4c91-b17a-097535b10f3c
	I0731 12:11:27.139669  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:27.139675  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:27.139683  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:27.139801  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"482","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0731 12:11:27.634920  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:27.634945  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:27.634960  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:27.634968  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:27.637776  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:27.637803  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:27.637813  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:27.637827  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:27.637834  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:27.637841  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:27.637848  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:27 GMT
	I0731 12:11:27.637854  916191 round_trippers.go:580]     Audit-Id: 3c9348d0-fb73-4db0-ac55-f25f8e4e3ad0
	I0731 12:11:27.637972  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"482","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0731 12:11:28.135662  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:28.135687  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:28.135697  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:28.135705  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:28.138251  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:28.138275  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:28.138284  916191 round_trippers.go:580]     Audit-Id: 09f3b95f-992e-422c-9987-a379f24a16c0
	I0731 12:11:28.138292  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:28.138298  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:28.138305  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:28.138312  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:28.138321  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:28 GMT
	I0731 12:11:28.138429  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"482","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0731 12:11:28.138790  916191 node_ready.go:58] node "multinode-951087-m02" has status "Ready":"False"
	I0731 12:11:28.634868  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:28.634891  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:28.634902  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:28.634909  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:28.637442  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:28.637463  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:28.637472  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:28 GMT
	I0731 12:11:28.637479  916191 round_trippers.go:580]     Audit-Id: a41212c0-9ab4-4afe-8667-c3f32910ec77
	I0731 12:11:28.637485  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:28.637492  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:28.637499  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:28.637506  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:28.637594  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"482","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0731 12:11:29.135180  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:29.135216  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:29.135229  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:29.135243  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:29.137877  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:29.137901  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:29.137913  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:29.137920  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:29 GMT
	I0731 12:11:29.137927  916191 round_trippers.go:580]     Audit-Id: 952ad8a5-646e-4489-8003-dcb65d7d7b31
	I0731 12:11:29.137934  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:29.137940  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:29.137947  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:29.138263  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"482","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0731 12:11:29.634880  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:29.634906  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:29.634915  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:29.634923  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:29.637540  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:29.637559  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:29.637567  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:29.637575  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:29 GMT
	I0731 12:11:29.637581  916191 round_trippers.go:580]     Audit-Id: 7422a71a-8529-4acd-9e11-1462a4a3ef5c
	I0731 12:11:29.637587  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:29.637594  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:29.637600  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:29.637740  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"482","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0731 12:11:30.135430  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:30.135462  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:30.135524  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:30.135533  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:30.138331  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:30.138361  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:30.138371  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:30.138378  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:30.138385  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:30.138392  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:30 GMT
	I0731 12:11:30.138399  916191 round_trippers.go:580]     Audit-Id: 80d14cf6-442b-4eb9-a8c7-b2b6caca089b
	I0731 12:11:30.138406  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:30.138525  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"482","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0731 12:11:30.138932  916191 node_ready.go:58] node "multinode-951087-m02" has status "Ready":"False"
	I0731 12:11:30.635764  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:30.635786  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:30.635795  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:30.635803  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:30.638419  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:30.638448  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:30.638458  916191 round_trippers.go:580]     Audit-Id: 45fb1011-898c-4704-bdc5-8ce56d1acafc
	I0731 12:11:30.638464  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:30.638471  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:30.638478  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:30.638486  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:30.638493  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:30 GMT
	I0731 12:11:30.638583  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"482","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0731 12:11:31.135775  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:31.135798  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:31.135809  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:31.135817  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:31.138434  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:31.138461  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:31.138471  916191 round_trippers.go:580]     Audit-Id: 50a280c8-a1e1-4c36-8f6e-0496b75031d7
	I0731 12:11:31.138478  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:31.138485  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:31.138492  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:31.138499  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:31.138506  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:31 GMT
	I0731 12:11:31.138645  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"482","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0731 12:11:31.635775  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:31.635797  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:31.635807  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:31.635814  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:31.638833  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:31.638855  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:31.638864  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:31.638871  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:31.638878  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:31.638888  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:31 GMT
	I0731 12:11:31.638895  916191 round_trippers.go:580]     Audit-Id: ea094074-ad84-43db-b49f-1ebc4eb3e944
	I0731 12:11:31.638901  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:31.639248  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"482","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0731 12:11:32.135735  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:32.135755  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:32.135765  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:32.135773  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:32.140193  916191 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 12:11:32.140221  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:32.140231  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:32.140239  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:32.140246  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:32.140258  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:32 GMT
	I0731 12:11:32.140268  916191 round_trippers.go:580]     Audit-Id: 3f72e7e5-853c-47c2-961e-8d5b0fa04378
	I0731 12:11:32.140275  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:32.140409  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"482","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0731 12:11:32.140786  916191 node_ready.go:58] node "multinode-951087-m02" has status "Ready":"False"
	I0731 12:11:32.635033  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:32.635056  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:32.635066  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:32.635073  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:32.637628  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:32.637652  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:32.637661  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:32 GMT
	I0731 12:11:32.637668  916191 round_trippers.go:580]     Audit-Id: 3efa743c-7b59-436c-bf1e-c99f7de4f1a4
	I0731 12:11:32.637675  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:32.637682  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:32.637689  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:32.637700  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:32.638222  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"482","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0731 12:11:33.134913  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:33.134937  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:33.134947  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:33.134955  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:33.137760  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:33.137795  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:33.137814  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:33.137823  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:33.137830  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:33 GMT
	I0731 12:11:33.137841  916191 round_trippers.go:580]     Audit-Id: ef1bbd37-b9de-41c0-bf88-3438cec52902
	I0731 12:11:33.137848  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:33.137855  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:33.138272  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"502","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5378 chars]
	I0731 12:11:33.138763  916191 node_ready.go:49] node "multinode-951087-m02" has status "Ready":"True"
	I0731 12:11:33.138788  916191 node_ready.go:38] duration metric: took 16.510495638s waiting for node "multinode-951087-m02" to be "Ready" ...
	I0731 12:11:33.138799  916191 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 12:11:33.138874  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0731 12:11:33.138885  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:33.138894  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:33.138907  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:33.143066  916191 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 12:11:33.143095  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:33.143105  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:33.143112  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:33.143119  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:33.143126  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:33.143133  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:33 GMT
	I0731 12:11:33.143140  916191 round_trippers.go:580]     Audit-Id: 2b56f837-9772-466d-85c4-d514166949bd
	I0731 12:11:33.143592  916191 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"502"},"items":[{"metadata":{"name":"coredns-5d78c9869d-nb8rj","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"f9dc9fa3-310f-4097-89e6-75625c1e7651","resourceVersion":"412","creationTimestamp":"2023-07-31T12:10:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"2a4133f9-5d7a-4f3f-854c-e3d46e752156","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a4133f9-5d7a-4f3f-854c-e3d46e752156\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 69099 chars]
	I0731 12:11:33.146570  916191 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-nb8rj" in "kube-system" namespace to be "Ready" ...
	I0731 12:11:33.146661  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-nb8rj
	I0731 12:11:33.146672  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:33.146682  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:33.146697  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:33.149720  916191 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 12:11:33.149743  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:33.149752  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:33.149759  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:33.149791  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:33.149800  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:33 GMT
	I0731 12:11:33.149806  916191 round_trippers.go:580]     Audit-Id: a57cb405-eb3c-415c-83a9-100dc0e60cde
	I0731 12:11:33.149816  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:33.149914  916191 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-nb8rj","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"f9dc9fa3-310f-4097-89e6-75625c1e7651","resourceVersion":"412","creationTimestamp":"2023-07-31T12:10:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"2a4133f9-5d7a-4f3f-854c-e3d46e752156","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a4133f9-5d7a-4f3f-854c-e3d46e752156\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0731 12:11:33.150432  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:11:33.150448  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:33.150457  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:33.150464  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:33.152882  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:33.152951  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:33.152973  916191 round_trippers.go:580]     Audit-Id: f7ae6b90-5e86-4201-bcdb-8753f4b86026
	I0731 12:11:33.152996  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:33.153029  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:33.153044  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:33.153051  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:33.153058  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:33 GMT
	I0731 12:11:33.153203  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"393","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0731 12:11:33.153607  916191 pod_ready.go:92] pod "coredns-5d78c9869d-nb8rj" in "kube-system" namespace has status "Ready":"True"
	I0731 12:11:33.153624  916191 pod_ready.go:81] duration metric: took 7.025551ms waiting for pod "coredns-5d78c9869d-nb8rj" in "kube-system" namespace to be "Ready" ...
	I0731 12:11:33.153636  916191 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-951087" in "kube-system" namespace to be "Ready" ...
	I0731 12:11:33.153699  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-951087
	I0731 12:11:33.153709  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:33.153717  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:33.153724  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:33.156228  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:33.156249  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:33.156257  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:33.156264  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:33 GMT
	I0731 12:11:33.156271  916191 round_trippers.go:580]     Audit-Id: 26c6c69d-f1eb-4ea1-b923-95f4ca247816
	I0731 12:11:33.156285  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:33.156292  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:33.156299  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:33.156486  916191 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-951087","namespace":"kube-system","uid":"37276bdd-7289-4086-8bb0-b8f832400a26","resourceVersion":"421","creationTimestamp":"2023-07-31T12:10:40Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ce01e94000c407043449b0d977079d34","kubernetes.io/config.mirror":"ce01e94000c407043449b0d977079d34","kubernetes.io/config.seen":"2023-07-31T12:10:40.309371836Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0731 12:11:33.156983  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:11:33.156999  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:33.157007  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:33.157014  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:33.159398  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:33.159416  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:33.159424  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:33.159430  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:33.159437  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:33.159443  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:33 GMT
	I0731 12:11:33.159450  916191 round_trippers.go:580]     Audit-Id: 55594b5f-b9c6-4d27-aec0-eee3465bc1f2
	I0731 12:11:33.159457  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:33.159566  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"393","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0731 12:11:33.159929  916191 pod_ready.go:92] pod "etcd-multinode-951087" in "kube-system" namespace has status "Ready":"True"
	I0731 12:11:33.159938  916191 pod_ready.go:81] duration metric: took 6.296126ms waiting for pod "etcd-multinode-951087" in "kube-system" namespace to be "Ready" ...
	I0731 12:11:33.159955  916191 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-951087" in "kube-system" namespace to be "Ready" ...
	I0731 12:11:33.160005  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-951087
	I0731 12:11:33.160010  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:33.160017  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:33.160024  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:33.162510  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:33.162533  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:33.162541  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:33.162548  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:33.162555  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:33.162562  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:33.162568  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:33 GMT
	I0731 12:11:33.162575  916191 round_trippers.go:580]     Audit-Id: c83da2d5-ad09-4ca1-93c1-ed1795937b55
	I0731 12:11:33.162695  916191 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-951087","namespace":"kube-system","uid":"5006315b-a9f0-4c65-a12c-532521088aca","resourceVersion":"422","creationTimestamp":"2023-07-31T12:10:39Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"2c12830565eebcd51287ac2e207ab987","kubernetes.io/config.mirror":"2c12830565eebcd51287ac2e207ab987","kubernetes.io/config.seen":"2023-07-31T12:10:32.705357367Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0731 12:11:33.163204  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:11:33.163212  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:33.163220  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:33.163227  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:33.165939  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:33.165957  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:33.165965  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:33.165972  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:33.165979  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:33.165985  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:33.165992  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:33 GMT
	I0731 12:11:33.165999  916191 round_trippers.go:580]     Audit-Id: eee83f33-1be7-4696-9e7e-3545002b280a
	I0731 12:11:33.166100  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"393","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0731 12:11:33.166465  916191 pod_ready.go:92] pod "kube-apiserver-multinode-951087" in "kube-system" namespace has status "Ready":"True"
	I0731 12:11:33.166474  916191 pod_ready.go:81] duration metric: took 6.512151ms waiting for pod "kube-apiserver-multinode-951087" in "kube-system" namespace to be "Ready" ...
	I0731 12:11:33.166485  916191 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-951087" in "kube-system" namespace to be "Ready" ...
	I0731 12:11:33.166539  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-951087
	I0731 12:11:33.166543  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:33.166550  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:33.166557  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:33.169183  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:33.169248  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:33.169262  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:33.169271  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:33.169278  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:33.169285  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:33 GMT
	I0731 12:11:33.169292  916191 round_trippers.go:580]     Audit-Id: 411f86ee-374d-464f-9275-c8a83970ba97
	I0731 12:11:33.169298  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:33.169828  916191 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-951087","namespace":"kube-system","uid":"aec99526-80d6-49c6-9b47-37c2bc155692","resourceVersion":"423","creationTimestamp":"2023-07-31T12:10:40Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"030c32208cdf1c653a4b957eca963c70","kubernetes.io/config.mirror":"030c32208cdf1c653a4b957eca963c70","kubernetes.io/config.seen":"2023-07-31T12:10:40.309381493Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0731 12:11:33.170435  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:11:33.170451  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:33.170461  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:33.170470  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:33.173154  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:33.173179  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:33.173188  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:33.173196  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:33.173202  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:33.173210  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:33 GMT
	I0731 12:11:33.173217  916191 round_trippers.go:580]     Audit-Id: 15f85412-f2cb-4cf9-9ff6-c3f0752c3448
	I0731 12:11:33.173226  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:33.173365  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"393","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0731 12:11:33.173805  916191 pod_ready.go:92] pod "kube-controller-manager-multinode-951087" in "kube-system" namespace has status "Ready":"True"
	I0731 12:11:33.173821  916191 pod_ready.go:81] duration metric: took 7.328771ms waiting for pod "kube-controller-manager-multinode-951087" in "kube-system" namespace to be "Ready" ...
	I0731 12:11:33.173834  916191 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4tt8k" in "kube-system" namespace to be "Ready" ...
	I0731 12:11:33.335219  916191 request.go:628] Waited for 161.305713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tt8k
	I0731 12:11:33.335300  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4tt8k
	I0731 12:11:33.335309  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:33.335319  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:33.335326  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:33.337969  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:33.337997  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:33.338006  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:33.338015  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:33 GMT
	I0731 12:11:33.338022  916191 round_trippers.go:580]     Audit-Id: fc97ef3b-55c6-4a72-965a-fa1f9391cd39
	I0731 12:11:33.338032  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:33.338046  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:33.338053  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:33.338157  916191 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4tt8k","generateName":"kube-proxy-","namespace":"kube-system","uid":"33763521-3e50-4038-9b61-15716fd40373","resourceVersion":"493","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5fd1aa1d-5807-48c6-81a3-ef82b2dd0da1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5fd1aa1d-5807-48c6-81a3-ef82b2dd0da1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0731 12:11:33.534945  916191 request.go:628] Waited for 196.3063ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:33.535032  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087-m02
	I0731 12:11:33.535060  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:33.535071  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:33.535084  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:33.537767  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:33.537837  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:33.537861  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:33.537874  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:33.537881  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:33.537888  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:33.537897  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:33 GMT
	I0731 12:11:33.537906  916191 round_trippers.go:580]     Audit-Id: 26a63342-4c7b-4e52-8b9c-6a81b21e0b2b
	I0731 12:11:33.538047  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087-m02","uid":"b65d5bdd-e43f-4db7-b833-d69af1b56f86","resourceVersion":"503","creationTimestamp":"2023-07-31T12:11:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5258 chars]
	I0731 12:11:33.538428  916191 pod_ready.go:92] pod "kube-proxy-4tt8k" in "kube-system" namespace has status "Ready":"True"
	I0731 12:11:33.538445  916191 pod_ready.go:81] duration metric: took 364.600346ms waiting for pod "kube-proxy-4tt8k" in "kube-system" namespace to be "Ready" ...
	I0731 12:11:33.538457  916191 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x2ljd" in "kube-system" namespace to be "Ready" ...
	I0731 12:11:33.735926  916191 request.go:628] Waited for 197.351268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2ljd
	I0731 12:11:33.735990  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2ljd
	I0731 12:11:33.736000  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:33.736009  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:33.736017  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:33.738598  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:33.738625  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:33.738637  916191 round_trippers.go:580]     Audit-Id: f39f8b8f-8d1f-4c48-acd9-13e984df8c69
	I0731 12:11:33.738644  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:33.738651  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:33.738686  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:33.738718  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:33.738733  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:33 GMT
	I0731 12:11:33.738856  916191 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-x2ljd","generateName":"kube-proxy-","namespace":"kube-system","uid":"ae696871-fdaa-44a8-8f72-914cf534dd5c","resourceVersion":"388","creationTimestamp":"2023-07-31T12:10:54Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5fd1aa1d-5807-48c6-81a3-ef82b2dd0da1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5fd1aa1d-5807-48c6-81a3-ef82b2dd0da1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0731 12:11:33.935772  916191 request.go:628] Waited for 196.360347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:11:33.935830  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:11:33.935835  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:33.935844  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:33.935882  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:33.938650  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:33.938724  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:33.938742  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:33.938750  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:33 GMT
	I0731 12:11:33.938761  916191 round_trippers.go:580]     Audit-Id: ba887739-37cf-40ea-ba56-d12e6a76ccae
	I0731 12:11:33.938769  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:33.938808  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:33.938821  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:33.939240  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"393","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0731 12:11:33.939669  916191 pod_ready.go:92] pod "kube-proxy-x2ljd" in "kube-system" namespace has status "Ready":"True"
	I0731 12:11:33.939686  916191 pod_ready.go:81] duration metric: took 401.222713ms waiting for pod "kube-proxy-x2ljd" in "kube-system" namespace to be "Ready" ...
	I0731 12:11:33.939698  916191 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-951087" in "kube-system" namespace to be "Ready" ...
	I0731 12:11:34.135065  916191 request.go:628] Waited for 195.279539ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-951087
	I0731 12:11:34.135140  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-951087
	I0731 12:11:34.135152  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:34.135161  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:34.135169  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:34.137883  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:34.137905  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:34.137914  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:34.137948  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:34.137963  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:34 GMT
	I0731 12:11:34.137970  916191 round_trippers.go:580]     Audit-Id: 99c8c606-45fa-41b4-907e-ea9a7cdde38d
	I0731 12:11:34.137977  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:34.137986  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:34.138145  916191 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-951087","namespace":"kube-system","uid":"6bb8f158-b688-4e46-a49b-5caaafc0516a","resourceVersion":"420","creationTimestamp":"2023-07-31T12:10:40Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c0663a64e088b0ec1a92123f2a642643","kubernetes.io/config.mirror":"c0663a64e088b0ec1a92123f2a642643","kubernetes.io/config.seen":"2023-07-31T12:10:40.309382749Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T12:10:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0731 12:11:34.335960  916191 request.go:628] Waited for 197.341333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:11:34.336073  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951087
	I0731 12:11:34.336086  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:34.336097  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:34.336104  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:34.338697  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:34.338720  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:34.338730  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:34.338737  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:34.338743  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:34.338750  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:34.338757  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:34 GMT
	I0731 12:11:34.338763  916191 round_trippers.go:580]     Audit-Id: 0de0252c-6989-4210-b6d4-74f6822e4a0a
	I0731 12:11:34.338877  916191 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"393","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T12:10:37Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0731 12:11:34.339344  916191 pod_ready.go:92] pod "kube-scheduler-multinode-951087" in "kube-system" namespace has status "Ready":"True"
	I0731 12:11:34.339361  916191 pod_ready.go:81] duration metric: took 399.655431ms waiting for pod "kube-scheduler-multinode-951087" in "kube-system" namespace to be "Ready" ...
	I0731 12:11:34.339373  916191 pod_ready.go:38] duration metric: took 1.200564615s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 12:11:34.339391  916191 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 12:11:34.339450  916191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 12:11:34.352685  916191 system_svc.go:56] duration metric: took 13.28455ms WaitForService to wait for kubelet.
	I0731 12:11:34.352711  916191 kubeadm.go:581] duration metric: took 17.743971971s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0731 12:11:34.352757  916191 node_conditions.go:102] verifying NodePressure condition ...
	I0731 12:11:34.535193  916191 request.go:628] Waited for 182.361387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0731 12:11:34.535278  916191 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0731 12:11:34.535306  916191 round_trippers.go:469] Request Headers:
	I0731 12:11:34.535319  916191 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0731 12:11:34.535328  916191 round_trippers.go:473]     Accept: application/json, */*
	I0731 12:11:34.538069  916191 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 12:11:34.538095  916191 round_trippers.go:577] Response Headers:
	I0731 12:11:34.538104  916191 round_trippers.go:580]     Date: Mon, 31 Jul 2023 12:11:34 GMT
	I0731 12:11:34.538111  916191 round_trippers.go:580]     Audit-Id: 65e9b81b-acf7-4f4b-9d26-39cd8fe37b20
	I0731 12:11:34.538119  916191 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 12:11:34.538149  916191 round_trippers.go:580]     Content-Type: application/json
	I0731 12:11:34.538169  916191 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e940db-7d10-434a-b2c7-1c397046bfd0
	I0731 12:11:34.538181  916191 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5a41df8f-092c-4d45-8f08-b94e31a1dfa3
	I0731 12:11:34.538361  916191 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"506"},"items":[{"metadata":{"name":"multinode-951087","uid":"e3be4004-f21d-4a48-912e-b9e16e85da9f","resourceVersion":"393","creationTimestamp":"2023-07-31T12:10:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951087","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-951087","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T12_10_41_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12332 chars]
	I0731 12:11:34.539004  916191 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0731 12:11:34.539038  916191 node_conditions.go:123] node cpu capacity is 2
	I0731 12:11:34.539050  916191 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0731 12:11:34.539056  916191 node_conditions.go:123] node cpu capacity is 2
	I0731 12:11:34.539061  916191 node_conditions.go:105] duration metric: took 186.297957ms to run NodePressure ...
	I0731 12:11:34.539077  916191 start.go:228] waiting for startup goroutines ...
	I0731 12:11:34.539106  916191 start.go:242] writing updated cluster config ...
	I0731 12:11:34.539436  916191 ssh_runner.go:195] Run: rm -f paused
	I0731 12:11:34.600601  916191 start.go:596] kubectl: 1.27.4, cluster: 1.27.3 (minor skew: 0)
	I0731 12:11:34.603603  916191 out.go:177] * Done! kubectl is now configured to use "multinode-951087" cluster and "default" namespace by default
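
	For reference, the pod_ready entries above are minikube polling the API server until each control-plane pod reports the PodReady condition as True. A minimal client-go sketch of that kind of readiness poll (an illustration under assumed names and the 6m budget the log shows, not minikube's actual implementation):

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // isPodReady reports whether the pod's PodReady condition is True,
	    // which is what each pod_ready.go:92 line above is confirming.
	    func isPodReady(pod *corev1.Pod) bool {
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        // Poll with the same 6m budget the log lines show ("waiting up to 6m0s").
	        deadline := time.Now().Add(6 * time.Minute)
	        for time.Now().Before(deadline) {
	            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
	                "kube-scheduler-multinode-951087", metav1.GetOptions{})
	            if err == nil && isPodReady(pod) {
	                fmt.Println("pod is Ready")
	                return
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        fmt.Println("timed out waiting for pod to be Ready")
	    }

	Each GET in the trace above is one iteration of such a loop; the request.go:628 lines show client-go's built-in client-side rate limiter delaying successive iterations.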
	
	* 
	* ==> CRI-O <==
	* Jul 31 12:10:57 multinode-951087 crio[899]: time="2023-07-31 12:10:57.712537729Z" level=info msg="Created container 140a0c4017897d3aa641392d91b7a59f5dcfc5f9513a8a4b44539a1a31793607: kube-system/storage-provisioner/storage-provisioner" id=ceb3b3ee-610e-4d10-878b-00656a0cfdbf name=/runtime.v1.RuntimeService/CreateContainer
	Jul 31 12:10:57 multinode-951087 crio[899]: time="2023-07-31 12:10:57.713113497Z" level=info msg="Starting container: 140a0c4017897d3aa641392d91b7a59f5dcfc5f9513a8a4b44539a1a31793607" id=ecfc39d4-1e2f-44af-b3de-c827c201e8ab name=/runtime.v1.RuntimeService/StartContainer
	Jul 31 12:10:57 multinode-951087 crio[899]: time="2023-07-31 12:10:57.714759941Z" level=info msg="Starting container: 72b0a32419e1776d9d0aa53f944c8f475dfadc38315a061cc9d8b0d85631496a" id=00983c1b-8bb1-43b1-9f58-98feb6cdc37a name=/runtime.v1.RuntimeService/StartContainer
	Jul 31 12:10:57 multinode-951087 crio[899]: time="2023-07-31 12:10:57.731320968Z" level=info msg="Started container" PID=1944 containerID=140a0c4017897d3aa641392d91b7a59f5dcfc5f9513a8a4b44539a1a31793607 description=kube-system/storage-provisioner/storage-provisioner id=ecfc39d4-1e2f-44af-b3de-c827c201e8ab name=/runtime.v1.RuntimeService/StartContainer sandboxID=37c5b3a4ff4b0126212d1df8a60d7263fb9c1c549184f43d382419cace3864e0
	Jul 31 12:10:57 multinode-951087 crio[899]: time="2023-07-31 12:10:57.736757807Z" level=info msg="Started container" PID=1962 containerID=72b0a32419e1776d9d0aa53f944c8f475dfadc38315a061cc9d8b0d85631496a description=kube-system/coredns-5d78c9869d-nb8rj/coredns id=00983c1b-8bb1-43b1-9f58-98feb6cdc37a name=/runtime.v1.RuntimeService/StartContainer sandboxID=49d469ef5db01c86748f38cdb8abec0ef3011a5f12b50baef9c1012d9dc751e7
	Jul 31 12:11:35 multinode-951087 crio[899]: time="2023-07-31 12:11:35.828423201Z" level=info msg="Running pod sandbox: default/busybox-67b7f59bb-bbjrl/POD" id=0893d017-f2fe-4082-90fb-cd4640ddaaf7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jul 31 12:11:35 multinode-951087 crio[899]: time="2023-07-31 12:11:35.828478085Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 31 12:11:35 multinode-951087 crio[899]: time="2023-07-31 12:11:35.850857841Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-bbjrl Namespace:default ID:c8cf452ad81a80a6a68acf07b0f78d1e70c0e9f3d52d6feef744a15cf9b94e49 UID:f6f07149-fbae-476b-85af-86c7fd86f5e0 NetNS:/var/run/netns/238a5f1a-9d7d-4880-8446-88fabebc9fd5 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 31 12:11:35 multinode-951087 crio[899]: time="2023-07-31 12:11:35.850912520Z" level=info msg="Adding pod default_busybox-67b7f59bb-bbjrl to CNI network \"kindnet\" (type=ptp)"
	Jul 31 12:11:35 multinode-951087 crio[899]: time="2023-07-31 12:11:35.871219972Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-bbjrl Namespace:default ID:c8cf452ad81a80a6a68acf07b0f78d1e70c0e9f3d52d6feef744a15cf9b94e49 UID:f6f07149-fbae-476b-85af-86c7fd86f5e0 NetNS:/var/run/netns/238a5f1a-9d7d-4880-8446-88fabebc9fd5 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 31 12:11:35 multinode-951087 crio[899]: time="2023-07-31 12:11:35.871378864Z" level=info msg="Checking pod default_busybox-67b7f59bb-bbjrl for CNI network kindnet (type=ptp)"
	Jul 31 12:11:35 multinode-951087 crio[899]: time="2023-07-31 12:11:35.891632317Z" level=info msg="Ran pod sandbox c8cf452ad81a80a6a68acf07b0f78d1e70c0e9f3d52d6feef744a15cf9b94e49 with infra container: default/busybox-67b7f59bb-bbjrl/POD" id=0893d017-f2fe-4082-90fb-cd4640ddaaf7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jul 31 12:11:35 multinode-951087 crio[899]: time="2023-07-31 12:11:35.892672412Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=ad09c710-4a8d-429b-95c4-24b3a833a3ed name=/runtime.v1.ImageService/ImageStatus
	Jul 31 12:11:35 multinode-951087 crio[899]: time="2023-07-31 12:11:35.892958934Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=ad09c710-4a8d-429b-95c4-24b3a833a3ed name=/runtime.v1.ImageService/ImageStatus
	Jul 31 12:11:35 multinode-951087 crio[899]: time="2023-07-31 12:11:35.895688361Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=e4ef449d-d21b-4892-855d-259814bf1599 name=/runtime.v1.ImageService/PullImage
	Jul 31 12:11:35 multinode-951087 crio[899]: time="2023-07-31 12:11:35.896967298Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jul 31 12:11:36 multinode-951087 crio[899]: time="2023-07-31 12:11:36.635738206Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jul 31 12:11:37 multinode-951087 crio[899]: time="2023-07-31 12:11:37.894277183Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=e4ef449d-d21b-4892-855d-259814bf1599 name=/runtime.v1.ImageService/PullImage
	Jul 31 12:11:37 multinode-951087 crio[899]: time="2023-07-31 12:11:37.895854556Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=3e612bbb-238c-4ced-9851-6f6f017ec544 name=/runtime.v1.ImageService/ImageStatus
	Jul 31 12:11:37 multinode-951087 crio[899]: time="2023-07-31 12:11:37.896823956Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=3e612bbb-238c-4ced-9851-6f6f017ec544 name=/runtime.v1.ImageService/ImageStatus
	Jul 31 12:11:37 multinode-951087 crio[899]: time="2023-07-31 12:11:37.897761676Z" level=info msg="Creating container: default/busybox-67b7f59bb-bbjrl/busybox" id=f62d26dd-ab5d-4e46-8b0e-5073a2523397 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 31 12:11:37 multinode-951087 crio[899]: time="2023-07-31 12:11:37.897853679Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 31 12:11:37 multinode-951087 crio[899]: time="2023-07-31 12:11:37.987059779Z" level=info msg="Created container 5ff1b4d1fa054f7b36076e00fb02a50ee8841ec5e91b879cee3f4522eef7d2db: default/busybox-67b7f59bb-bbjrl/busybox" id=f62d26dd-ab5d-4e46-8b0e-5073a2523397 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 31 12:11:37 multinode-951087 crio[899]: time="2023-07-31 12:11:37.987823641Z" level=info msg="Starting container: 5ff1b4d1fa054f7b36076e00fb02a50ee8841ec5e91b879cee3f4522eef7d2db" id=2978b3a4-5e86-443e-863e-c8181cf8973d name=/runtime.v1.RuntimeService/StartContainer
	Jul 31 12:11:38 multinode-951087 crio[899]: time="2023-07-31 12:11:38.003851386Z" level=info msg="Started container" PID=2100 containerID=5ff1b4d1fa054f7b36076e00fb02a50ee8841ec5e91b879cee3f4522eef7d2db description=default/busybox-67b7f59bb-bbjrl/busybox id=2978b3a4-5e86-443e-863e-c8181cf8973d name=/runtime.v1.RuntimeService/StartContainer sandboxID=c8cf452ad81a80a6a68acf07b0f78d1e70c0e9f3d52d6feef744a15cf9b94e49
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5ff1b4d1fa054       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago        Running             busybox                   0                   c8cf452ad81a8       busybox-67b7f59bb-bbjrl
	72b0a32419e17       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      45 seconds ago       Running             coredns                   0                   49d469ef5db01       coredns-5d78c9869d-nb8rj
	140a0c4017897       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      45 seconds ago       Running             storage-provisioner       0                   37c5b3a4ff4b0       storage-provisioner
	78a00204d0c84       fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a                                      46 seconds ago       Running             kube-proxy                0                   c53d668a7c11e       kube-proxy-x2ljd
	515991a7c4ca2       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79                                      46 seconds ago       Running             kindnet-cni               0                   f505d00509d1e       kindnet-4cjwb
	30ecad503232d       ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8                                      About a minute ago   Running             kube-controller-manager   0                   8b5edfb3e7ce2       kube-controller-manager-multinode-951087
	ba87f25a7a1d7       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737                                      About a minute ago   Running             etcd                      0                   5d4faa82ec275       etcd-multinode-951087
	1291cea841ceb       39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473                                      About a minute ago   Running             kube-apiserver            0                   a7df28e559cb6       kube-apiserver-multinode-951087
	6d12070e7dfbf       bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540                                      About a minute ago   Running             kube-scheduler            0                   2e467bbe8a715       kube-scheduler-multinode-951087
	
	* 
	* ==> coredns [72b0a32419e1776d9d0aa53f944c8f475dfadc38315a061cc9d8b0d85631496a] <==
	* [INFO] 10.244.0.3:35349 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105829s
	[INFO] 10.244.1.2:42332 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147101s
	[INFO] 10.244.1.2:38141 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001119931s
	[INFO] 10.244.1.2:42039 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000112065s
	[INFO] 10.244.1.2:38914 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000065714s
	[INFO] 10.244.1.2:50269 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00089713s
	[INFO] 10.244.1.2:37530 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076668s
	[INFO] 10.244.1.2:50640 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000752s
	[INFO] 10.244.1.2:44041 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089444s
	[INFO] 10.244.0.3:54266 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114544s
	[INFO] 10.244.0.3:58480 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007593s
	[INFO] 10.244.0.3:37070 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000048705s
	[INFO] 10.244.0.3:37778 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043175s
	[INFO] 10.244.1.2:38520 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012068s
	[INFO] 10.244.1.2:41499 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000083946s
	[INFO] 10.244.1.2:47817 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079122s
	[INFO] 10.244.1.2:33337 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062769s
	[INFO] 10.244.0.3:50542 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000082075s
	[INFO] 10.244.0.3:49195 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000122928s
	[INFO] 10.244.0.3:58199 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097427s
	[INFO] 10.244.0.3:37723 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000083938s
	[INFO] 10.244.1.2:58908 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131527s
	[INFO] 10.244.1.2:52791 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000062793s
	[INFO] 10.244.1.2:60446 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000067208s
	[INFO] 10.244.1.2:34683 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000060497s
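
	The coredns lines above record in-cluster lookups for kubernetes.default.svc.cluster.local and host.minikube.internal. A small Go sketch of the same query (illustrative; it is only meaningful when run inside a pod, where /etc/resolv.conf points at the cluster DNS service, 10.96.0.10):

	    package main

	    import (
	        "context"
	        "fmt"
	        "net"
	    )

	    func main() {
	        var r net.Resolver
	        // Matches the `A IN kubernetes.default.svc.cluster.local` queries logged above.
	        addrs, err := r.LookupHost(context.Background(),
	            "kubernetes.default.svc.cluster.local")
	        if err != nil {
	            panic(err)
	        }
	        fmt.Println(addrs) // expected: [10.96.0.1], per the NOERROR answers above
	    }

	The NXDOMAIN entries for bare "kubernetes.default" are the resolver walking the search path before the fully qualified name succeeds.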
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-951087
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-951087
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0b7b0f4114385a1c2b88c73e894c2289f44aee35
	                    minikube.k8s.io/name=multinode-951087
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_31T12_10_41_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 31 Jul 2023 12:10:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-951087
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 31 Jul 2023 12:11:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 31 Jul 2023 12:11:41 +0000   Mon, 31 Jul 2023 12:10:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 31 Jul 2023 12:11:41 +0000   Mon, 31 Jul 2023 12:10:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 31 Jul 2023 12:11:41 +0000   Mon, 31 Jul 2023 12:10:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 31 Jul 2023 12:11:41 +0000   Mon, 31 Jul 2023 12:10:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-951087
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	System Info:
	  Machine ID:                 1f00bd550e834ed5a530c3f5962668d7
	  System UUID:                82d51fb0-4664-4815-a41e-d2006db456c9
	  Boot ID:                    3709f028-2d57-4df1-ae3d-22c113dc2eeb
	  Kernel Version:             5.15.0-1040-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-bbjrl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5d78c9869d-nb8rj                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     49s
	  kube-system                 etcd-multinode-951087                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         63s
	  kube-system                 kindnet-4cjwb                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      49s
	  kube-system                 kube-apiserver-multinode-951087             250m (12%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-controller-manager-multinode-951087    200m (10%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-x2ljd                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kube-scheduler-multinode-951087             100m (5%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 46s   kube-proxy       
	  Normal  Starting                 63s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s   kubelet          Node multinode-951087 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s   kubelet          Node multinode-951087 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s   kubelet          Node multinode-951087 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s   node-controller  Node multinode-951087 event: Registered Node multinode-951087 in Controller
	  Normal  NodeReady                46s   kubelet          Node multinode-951087 status is now: NodeReady
	
	
	Name:               multinode-951087-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-951087-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 31 Jul 2023 12:11:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-951087-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 31 Jul 2023 12:11:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 31 Jul 2023 12:11:32 +0000   Mon, 31 Jul 2023 12:11:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 31 Jul 2023 12:11:32 +0000   Mon, 31 Jul 2023 12:11:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 31 Jul 2023 12:11:32 +0000   Mon, 31 Jul 2023 12:11:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 31 Jul 2023 12:11:32 +0000   Mon, 31 Jul 2023 12:11:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-951087-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	System Info:
	  Machine ID:                 1550ce0cea7840ffa9faffc63857b954
	  System UUID:                e37496ac-197d-47c4-944c-818c55dfac8f
	  Boot ID:                    3709f028-2d57-4df1-ae3d-22c113dc2eeb
	  Kernel Version:             5.15.0-1040-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-sssw6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-7ncf2              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-proxy-4tt8k           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14s                kube-proxy       
	  Normal  NodeHasSufficientMemory  28s (x5 over 29s)  kubelet          Node multinode-951087-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s (x5 over 29s)  kubelet          Node multinode-951087-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s (x5 over 29s)  kubelet          Node multinode-951087-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node multinode-951087-m02 event: Registered Node multinode-951087-m02 in Controller
	  Normal  NodeReady                11s                kubelet          Node multinode-951087-m02 status is now: NodeReady
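
	The Conditions tables above are what the node_conditions check in the start-up log walks: a node passes when MemoryPressure, DiskPressure, and PIDPressure are all False. A hedged client-go sketch of that verification (illustrative, not minikube's code):

	    package main

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	        if err != nil {
	            panic(err)
	        }
	        pressure := map[corev1.NodeConditionType]bool{
	            corev1.NodeMemoryPressure: true,
	            corev1.NodeDiskPressure:   true,
	            corev1.NodePIDPressure:    true,
	        }
	        for _, n := range nodes.Items {
	            for _, c := range n.Status.Conditions {
	                // A pressure condition with Status=True would fail the check;
	                // both nodes above report False for all three.
	                if pressure[c.Type] && c.Status == corev1.ConditionTrue {
	                    fmt.Printf("node %s reports %s\n", n.Name, c.Type)
	                }
	            }
	        }
	    }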
	
	* 
	* ==> dmesg <==
	* [  +0.001037] FS-Cache: O-key=[8] 'ede1c90000000000'
	[  +0.000719] FS-Cache: N-cookie c=000000ad [p=000000a4 fl=2 nc=0 na=1]
	[  +0.000960] FS-Cache: N-cookie d=00000000d17d7ada{9p.inode} n=000000001a6bd468
	[  +0.001024] FS-Cache: N-key=[8] 'ede1c90000000000'
	[  +0.005951] FS-Cache: Duplicate cookie detected
	[  +0.000744] FS-Cache: O-cookie c=000000a7 [p=000000a4 fl=226 nc=0 na=1]
	[  +0.001019] FS-Cache: O-cookie d=00000000d17d7ada{9p.inode} n=0000000040ec07b0
	[  +0.001100] FS-Cache: O-key=[8] 'ede1c90000000000'
	[  +0.000739] FS-Cache: N-cookie c=000000ae [p=000000a4 fl=2 nc=0 na=1]
	[  +0.000924] FS-Cache: N-cookie d=00000000d17d7ada{9p.inode} n=00000000bcbfd487
	[  +0.001134] FS-Cache: N-key=[8] 'ede1c90000000000'
	[  +2.785467] FS-Cache: Duplicate cookie detected
	[  +0.000734] FS-Cache: O-cookie c=000000a5 [p=000000a4 fl=226 nc=0 na=1]
	[  +0.000951] FS-Cache: O-cookie d=00000000d17d7ada{9p.inode} n=000000006d9d7fe3
	[  +0.001098] FS-Cache: O-key=[8] 'ebe1c90000000000'
	[  +0.000685] FS-Cache: N-cookie c=000000b0 [p=000000a4 fl=2 nc=0 na=1]
	[  +0.000905] FS-Cache: N-cookie d=00000000d17d7ada{9p.inode} n=0000000073926d86
	[  +0.001020] FS-Cache: N-key=[8] 'ebe1c90000000000'
	[  +0.282652] FS-Cache: Duplicate cookie detected
	[  +0.000710] FS-Cache: O-cookie c=000000aa [p=000000a4 fl=226 nc=0 na=1]
	[  +0.001044] FS-Cache: O-cookie d=00000000d17d7ada{9p.inode} n=000000008660d6a7
	[  +0.001083] FS-Cache: O-key=[8] 'f4e1c90000000000'
	[  +0.000746] FS-Cache: N-cookie c=000000b1 [p=000000a4 fl=2 nc=0 na=1]
	[  +0.000957] FS-Cache: N-cookie d=00000000d17d7ada{9p.inode} n=000000007f9efda2
	[  +0.001104] FS-Cache: N-key=[8] 'f4e1c90000000000'
	
	* 
	* ==> etcd [ba87f25a7a1d742c4246e14bac7ca1f86d838ef9a6249fe4dc409a8758462984] <==
	* {"level":"info","ts":"2023-07-31T12:10:33.647Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-31T12:10:33.647Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-07-31T12:10:33.647Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-07-31T12:10:33.647Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-31T12:10:33.648Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"b2c6679ac05f2cf1","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2023-07-31T12:10:33.648Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-07-31T12:10:33.648Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-07-31T12:10:33.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-31T12:10:33.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-31T12:10:33.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-07-31T12:10:33.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-07-31T12:10:33.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-07-31T12:10:33.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-07-31T12:10:33.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-07-31T12:10:33.924Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-31T12:10:33.925Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-951087 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-31T12:10:33.925Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-31T12:10:33.926Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-31T12:10:33.927Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-07-31T12:10:33.928Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-31T12:10:33.928Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-31T12:10:33.928Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-31T12:10:33.928Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-31T12:10:33.939Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-31T12:10:33.944Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  12:11:43 up 19:54,  0 users,  load average: 1.58, 2.05, 2.41
	Linux multinode-951087 5.15.0-1040-aws #45~20.04.1-Ubuntu SMP Tue Jul 11 19:11:12 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [515991a7c4ca23e5da50cb23c18fa26002a9751fa0777f30ecb78b5fe4820e71] <==
	* I0731 12:10:56.433592       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0731 12:10:56.433664       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0731 12:10:56.433781       1 main.go:116] setting mtu 1500 for CNI 
	I0731 12:10:56.433795       1 main.go:146] kindnetd IP family: "ipv4"
	I0731 12:10:56.433805       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0731 12:10:56.827424       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0731 12:10:56.827522       1 main.go:227] handling current node
	I0731 12:11:06.939808       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0731 12:11:06.939834       1 main.go:227] handling current node
	I0731 12:11:16.950291       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0731 12:11:16.950323       1 main.go:227] handling current node
	I0731 12:11:16.950334       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0731 12:11:16.950339       1 main.go:250] Node multinode-951087-m02 has CIDR [10.244.1.0/24] 
	I0731 12:11:16.950495       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0731 12:11:26.963589       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0731 12:11:26.963614       1 main.go:227] handling current node
	I0731 12:11:26.963624       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0731 12:11:26.963630       1 main.go:250] Node multinode-951087-m02 has CIDR [10.244.1.0/24] 
	I0731 12:11:36.976343       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0731 12:11:36.976370       1 main.go:227] handling current node
	I0731 12:11:36.976382       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0731 12:11:36.976389       1 main.go:250] Node multinode-951087-m02 has CIDR [10.244.1.0/24] 
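
	The kindnet entries above show pod-CIDR routes being programmed: traffic for the second node's 10.244.1.0/24 range is sent via that node's IP, 192.168.58.3. A sketch of the same route add using the vishvananda/netlink package (an assumption chosen for illustration because the logged route struct matches its format; requires CAP_NET_ADMIN on the node):

	    package main

	    import (
	        "net"

	        "github.com/vishvananda/netlink"
	    )

	    func main() {
	        // multinode-951087-m02's PodCIDR and InternalIP, from the sections above.
	        _, dst, err := net.ParseCIDR("10.244.1.0/24")
	        if err != nil {
	            panic(err)
	        }
	        route := netlink.Route{
	            Dst: dst,
	            Gw:  net.ParseIP("192.168.58.3"),
	        }
	        // Equivalent to: ip route add 10.244.1.0/24 via 192.168.58.3
	        if err := netlink.RouteAdd(&route); err != nil {
	            panic(err)
	        }
	    }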
	
	* 
	* ==> kube-apiserver [1291cea841ceb7e716413a5016086171a49b16c0a653fedd6d18b48a5e012246] <==
	* I0731 12:10:37.420224       1 shared_informer.go:318] Caches are synced for configmaps
	I0731 12:10:37.386631       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 12:10:37.420960       1 aggregator.go:152] initial CRD sync complete...
	I0731 12:10:37.421035       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 12:10:37.421067       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 12:10:37.421112       1 cache.go:39] Caches are synced for autoregister controller
	I0731 12:10:37.419882       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0731 12:10:37.439486       1 controller.go:624] quota admission added evaluator for: namespaces
	I0731 12:10:37.506187       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 12:10:37.875055       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0731 12:10:38.187054       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0731 12:10:38.193620       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0731 12:10:38.193642       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 12:10:38.728315       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 12:10:38.782322       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 12:10:38.893776       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0731 12:10:38.899920       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0731 12:10:38.900945       1 controller.go:624] quota admission added evaluator for: endpoints
	I0731 12:10:38.905318       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 12:10:39.311608       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0731 12:10:40.228220       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0731 12:10:40.241030       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0731 12:10:40.252507       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0731 12:10:54.098594       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0731 12:10:54.154925       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [30ecad503232da6c47376352b19914a552d87f0eabdcc734a0fd130e21ab729c] <==
	* I0731 12:10:53.347439       1 shared_informer.go:318] Caches are synced for resource quota
	I0731 12:10:53.352971       1 shared_informer.go:318] Caches are synced for resource quota
	I0731 12:10:53.390419       1 shared_informer.go:318] Caches are synced for disruption
	I0731 12:10:53.393897       1 shared_informer.go:318] Caches are synced for deployment
	I0731 12:10:53.791413       1 shared_informer.go:318] Caches are synced for garbage collector
	I0731 12:10:53.791529       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0731 12:10:53.832077       1 shared_informer.go:318] Caches are synced for garbage collector
	I0731 12:10:54.110008       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-x2ljd"
	I0731 12:10:54.117688       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-4cjwb"
	I0731 12:10:54.165148       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 2"
	I0731 12:10:54.251727       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-2zdp9"
	I0731 12:10:54.261951       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-nb8rj"
	I0731 12:10:54.448484       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0731 12:10:54.565085       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-2zdp9"
	I0731 12:10:58.180163       1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0731 12:11:15.586431       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-951087-m02\" does not exist"
	I0731 12:11:15.598104       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-951087-m02" podCIDRs=[10.244.1.0/24]
	I0731 12:11:15.608275       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4tt8k"
	I0731 12:11:15.608889       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7ncf2"
	I0731 12:11:18.182200       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-951087-m02"
	I0731 12:11:18.182198       1 event.go:307] "Event occurred" object="multinode-951087-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-951087-m02 event: Registered Node multinode-951087-m02 in Controller"
	W0731 12:11:32.755610       1 topologycache.go:232] Can't get CPU or zone information for multinode-951087-m02 node
	I0731 12:11:35.464598       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
	I0731 12:11:35.484526       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-sssw6"
	I0731 12:11:35.506262       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-bbjrl"
	
	* 
	* ==> kube-proxy [78a00204d0c84d5beeb05e396e30053676023f297f9136c0f0dfe435cb036f63] <==
	* I0731 12:10:56.480500       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0731 12:10:56.480612       1 server_others.go:110] "Detected node IP" address="192.168.58.2"
	I0731 12:10:56.480630       1 server_others.go:554] "Using iptables proxy"
	I0731 12:10:56.536455       1 server_others.go:192] "Using iptables Proxier"
	I0731 12:10:56.536503       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0731 12:10:56.536513       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0731 12:10:56.536527       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0731 12:10:56.536593       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 12:10:56.537132       1 server.go:658] "Version info" version="v1.27.3"
	I0731 12:10:56.537153       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 12:10:56.538540       1 config.go:188] "Starting service config controller"
	I0731 12:10:56.538560       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0731 12:10:56.538592       1 config.go:97] "Starting endpoint slice config controller"
	I0731 12:10:56.538596       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0731 12:10:56.540609       1 config.go:315] "Starting node config controller"
	I0731 12:10:56.540633       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0731 12:10:56.638873       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0731 12:10:56.639017       1 shared_informer.go:318] Caches are synced for service config
	I0731 12:10:56.640825       1 shared_informer.go:318] Caches are synced for node config
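
	Aside: "Setting route_localnet=1" in the kube-proxy log above is a plain sysctl write that makes NodePorts reachable via 127.0.0.1. A hedged sketch of the equivalent operation in Go follows; the /proc path is the standard Linux location for this sysctl, and the helper name is illustrative.

	    package main

	    import (
	        "log"
	        "os"
	    )

	    // setRouteLocalnet mirrors what the kube-proxy log above reports:
	    // enabling net.ipv4.conf.all.route_localnet so loopback traffic may
	    // be routed, letting NodePorts answer on 127.0.0.1.
	    func setRouteLocalnet() error {
	        return os.WriteFile("/proc/sys/net/ipv4/conf/all/route_localnet", []byte("1"), 0644)
	    }

	    func main() {
	        if err := setRouteLocalnet(); err != nil {
	            log.Fatalf("sysctl write failed (needs root): %v", err)
	        }
	    }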
	
	* 
	* ==> kube-scheduler [6d12070e7dfbf8f4746d1ee019adc88b1165027ba5bb5c54d42e6945a6a711ac] <==
	* W0731 12:10:37.398856       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 12:10:37.398906       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 12:10:37.399088       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 12:10:37.399133       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 12:10:37.400264       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 12:10:37.402761       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 12:10:37.400308       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 12:10:37.402846       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 12:10:37.400324       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 12:10:37.402885       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 12:10:38.241694       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 12:10:38.241816       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 12:10:38.359712       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 12:10:38.359747       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 12:10:38.369452       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 12:10:38.369490       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 12:10:38.386668       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 12:10:38.386785       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 12:10:38.449520       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 12:10:38.449560       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 12:10:38.475070       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 12:10:38.475107       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 12:10:38.725985       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 12:10:38.726047       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0731 12:10:41.075751       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
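
	Aside: the Forbidden errors above are the usual startup race: the scheduler's informers begin listing before the apiserver finishes bootstrapping RBAC, and they keep retrying until the cache-sync line at 12:10:41. Below is a sketch of the same tolerate-Forbidden-and-retry pattern with client-go; the kubeconfig path is hypothetical, and this is not the scheduler's actual code.

	    package main

	    import (
	        "context"
	        "log"
	        "time"

	        apierrors "k8s.io/apimachinery/pkg/api/errors"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        // Hypothetical kubeconfig path; the test harness uses its own.
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	        if err != nil {
	            log.Fatal(err)
	        }
	        client := kubernetes.NewForConfigOrDie(cfg)

	        // Keep listing until RBAC bootstrapping stops returning Forbidden,
	        // the same way the reflectors above back off and retry.
	        for {
	            _, err := client.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
	            if err == nil {
	                log.Println("RBAC ready; caches can sync now")
	                return
	            }
	            if !apierrors.IsForbidden(err) {
	                log.Fatalf("unexpected error: %v", err)
	            }
	            time.Sleep(time.Second)
	        }
	    }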
	
	* 
	* ==> kubelet <==
	* Jul 31 12:10:54 multinode-951087 kubelet[1394]: I0731 12:10:54.234531    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27h88\" (UniqueName: \"kubernetes.io/projected/ae696871-fdaa-44a8-8f72-914cf534dd5c-kube-api-access-27h88\") pod \"kube-proxy-x2ljd\" (UID: \"ae696871-fdaa-44a8-8f72-914cf534dd5c\") " pod="kube-system/kube-proxy-x2ljd"
	Jul 31 12:10:54 multinode-951087 kubelet[1394]: I0731 12:10:54.234553    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae696871-fdaa-44a8-8f72-914cf534dd5c-lib-modules\") pod \"kube-proxy-x2ljd\" (UID: \"ae696871-fdaa-44a8-8f72-914cf534dd5c\") " pod="kube-system/kube-proxy-x2ljd"
	Jul 31 12:10:55 multinode-951087 kubelet[1394]: E0731 12:10:55.335863    1394 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Jul 31 12:10:55 multinode-951087 kubelet[1394]: E0731 12:10:55.335971    1394 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ae696871-fdaa-44a8-8f72-914cf534dd5c-kube-proxy podName:ae696871-fdaa-44a8-8f72-914cf534dd5c nodeName:}" failed. No retries permitted until 2023-07-31 12:10:55.83594687 +0000 UTC m=+15.634085547 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/ae696871-fdaa-44a8-8f72-914cf534dd5c-kube-proxy") pod "kube-proxy-x2ljd" (UID: "ae696871-fdaa-44a8-8f72-914cf534dd5c") : failed to sync configmap cache: timed out waiting for the condition
	Jul 31 12:10:55 multinode-951087 kubelet[1394]: E0731 12:10:55.366255    1394 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Jul 31 12:10:55 multinode-951087 kubelet[1394]: E0731 12:10:55.366293    1394 projected.go:198] Error preparing data for projected volume kube-api-access-wkk9s for pod kube-system/kindnet-4cjwb: failed to sync configmap cache: timed out waiting for the condition
	Jul 31 12:10:55 multinode-951087 kubelet[1394]: E0731 12:10:55.366372    1394 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54bdceae-01af-4821-9cf7-298343953a96-kube-api-access-wkk9s podName:54bdceae-01af-4821-9cf7-298343953a96 nodeName:}" failed. No retries permitted until 2023-07-31 12:10:55.866351166 +0000 UTC m=+15.664489842 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wkk9s" (UniqueName: "kubernetes.io/projected/54bdceae-01af-4821-9cf7-298343953a96-kube-api-access-wkk9s") pod "kindnet-4cjwb" (UID: "54bdceae-01af-4821-9cf7-298343953a96") : failed to sync configmap cache: timed out waiting for the condition
	Jul 31 12:10:55 multinode-951087 kubelet[1394]: E0731 12:10:55.366567    1394 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Jul 31 12:10:55 multinode-951087 kubelet[1394]: E0731 12:10:55.366589    1394 projected.go:198] Error preparing data for projected volume kube-api-access-27h88 for pod kube-system/kube-proxy-x2ljd: failed to sync configmap cache: timed out waiting for the condition
	Jul 31 12:10:55 multinode-951087 kubelet[1394]: E0731 12:10:55.366623    1394 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ae696871-fdaa-44a8-8f72-914cf534dd5c-kube-api-access-27h88 podName:ae696871-fdaa-44a8-8f72-914cf534dd5c nodeName:}" failed. No retries permitted until 2023-07-31 12:10:55.866612966 +0000 UTC m=+15.664751643 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-27h88" (UniqueName: "kubernetes.io/projected/ae696871-fdaa-44a8-8f72-914cf534dd5c-kube-api-access-27h88") pod "kube-proxy-x2ljd" (UID: "ae696871-fdaa-44a8-8f72-914cf534dd5c") : failed to sync configmap cache: timed out waiting for the condition
	Jul 31 12:10:56 multinode-951087 kubelet[1394]: W0731 12:10:56.253063    1394 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/6a8d3aff5e733e121ed34bafca8a471542ec46bd04d1fe3366b9e0d8f0426fac/crio-c53d668a7c11e123ca8c8d3688e544e3711c6b140e99639c3b88fb66062cb59f WatchSource:0}: Error finding container c53d668a7c11e123ca8c8d3688e544e3711c6b140e99639c3b88fb66062cb59f: Status 404 returned error can't find the container with id c53d668a7c11e123ca8c8d3688e544e3711c6b140e99639c3b88fb66062cb59f
	Jul 31 12:10:56 multinode-951087 kubelet[1394]: I0731 12:10:56.505201    1394 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-4cjwb" podStartSLOduration=2.505160395 podCreationTimestamp="2023-07-31 12:10:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-31 12:10:56.50400295 +0000 UTC m=+16.302141627" watchObservedRunningTime="2023-07-31 12:10:56.505160395 +0000 UTC m=+16.303299080"
	Jul 31 12:10:56 multinode-951087 kubelet[1394]: I0731 12:10:56.521917    1394 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-x2ljd" podStartSLOduration=2.52187146 podCreationTimestamp="2023-07-31 12:10:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-31 12:10:56.520926495 +0000 UTC m=+16.319065172" watchObservedRunningTime="2023-07-31 12:10:56.52187146 +0000 UTC m=+16.320010137"
	Jul 31 12:10:57 multinode-951087 kubelet[1394]: I0731 12:10:57.222239    1394 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jul 31 12:10:57 multinode-951087 kubelet[1394]: I0731 12:10:57.249840    1394 topology_manager.go:212] "Topology Admit Handler"
	Jul 31 12:10:57 multinode-951087 kubelet[1394]: I0731 12:10:57.253163    1394 topology_manager.go:212] "Topology Admit Handler"
	Jul 31 12:10:57 multinode-951087 kubelet[1394]: I0731 12:10:57.362169    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcqtw\" (UniqueName: \"kubernetes.io/projected/f9dc9fa3-310f-4097-89e6-75625c1e7651-kube-api-access-zcqtw\") pod \"coredns-5d78c9869d-nb8rj\" (UID: \"f9dc9fa3-310f-4097-89e6-75625c1e7651\") " pod="kube-system/coredns-5d78c9869d-nb8rj"
	Jul 31 12:10:57 multinode-951087 kubelet[1394]: I0731 12:10:57.362230    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9dc9fa3-310f-4097-89e6-75625c1e7651-config-volume\") pod \"coredns-5d78c9869d-nb8rj\" (UID: \"f9dc9fa3-310f-4097-89e6-75625c1e7651\") " pod="kube-system/coredns-5d78c9869d-nb8rj"
	Jul 31 12:10:57 multinode-951087 kubelet[1394]: I0731 12:10:57.362261    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/78fd5833-1fa8-4e9a-8411-0c5880d460a7-tmp\") pod \"storage-provisioner\" (UID: \"78fd5833-1fa8-4e9a-8411-0c5880d460a7\") " pod="kube-system/storage-provisioner"
	Jul 31 12:10:57 multinode-951087 kubelet[1394]: I0731 12:10:57.362285    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2vsg\" (UniqueName: \"kubernetes.io/projected/78fd5833-1fa8-4e9a-8411-0c5880d460a7-kube-api-access-c2vsg\") pod \"storage-provisioner\" (UID: \"78fd5833-1fa8-4e9a-8411-0c5880d460a7\") " pod="kube-system/storage-provisioner"
	Jul 31 12:10:58 multinode-951087 kubelet[1394]: I0731 12:10:58.545648    1394 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-nb8rj" podStartSLOduration=4.545604337 podCreationTimestamp="2023-07-31 12:10:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-31 12:10:58.512181779 +0000 UTC m=+18.310320472" watchObservedRunningTime="2023-07-31 12:10:58.545604337 +0000 UTC m=+18.343743022"
	Jul 31 12:11:00 multinode-951087 kubelet[1394]: I0731 12:11:00.423852    1394 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=5.423806145 podCreationTimestamp="2023-07-31 12:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-31 12:10:58.555137679 +0000 UTC m=+18.353276364" watchObservedRunningTime="2023-07-31 12:11:00.423806145 +0000 UTC m=+20.221944830"
	Jul 31 12:11:35 multinode-951087 kubelet[1394]: I0731 12:11:35.526839    1394 topology_manager.go:212] "Topology Admit Handler"
	Jul 31 12:11:35 multinode-951087 kubelet[1394]: I0731 12:11:35.611614    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5pfm\" (UniqueName: \"kubernetes.io/projected/f6f07149-fbae-476b-85af-86c7fd86f5e0-kube-api-access-f5pfm\") pod \"busybox-67b7f59bb-bbjrl\" (UID: \"f6f07149-fbae-476b-85af-86c7fd86f5e0\") " pod="default/busybox-67b7f59bb-bbjrl"
	Jul 31 12:11:35 multinode-951087 kubelet[1394]: W0731 12:11:35.889840    1394 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/6a8d3aff5e733e121ed34bafca8a471542ec46bd04d1fe3366b9e0d8f0426fac/crio-c8cf452ad81a80a6a68acf07b0f78d1e70c0e9f3d52d6feef744a15cf9b94e49 WatchSource:0}: Error finding container c8cf452ad81a80a6a68acf07b0f78d1e70c0e9f3d52d6feef744a15cf9b94e49: Status 404 returned error can't find the container with id c8cf452ad81a80a6a68acf07b0f78d1e70c0e9f3d52d6feef744a15cf9b94e49
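
	Aside: the MountVolume.SetUp failures above are transient ordering noise: the kubelet's configmap cache had not synced yet, so each failed volume operation is parked behind a per-operation retry gate ("No retries permitted until ... durationBeforeRetry 500ms"). A standard-library sketch of that gate follows; the type and field names are illustrative, not kubelet internals.

	    package main

	    import (
	        "errors"
	        "fmt"
	        "time"
	    )

	    // retryGate mimics the "No retries permitted until <t>" behaviour
	    // logged by nestedpendingoperations: a failed operation may not run
	    // again before now + durationBeforeRetry (500ms in the log above).
	    type retryGate struct {
	        notBefore time.Time
	    }

	    func (g *retryGate) markFailed(backoff time.Duration) {
	        g.notBefore = time.Now().Add(backoff)
	    }

	    func (g *retryGate) tryRun(op func() error) error {
	        if time.Now().Before(g.notBefore) {
	            return fmt.Errorf("no retries permitted until %s", g.notBefore.Format(time.RFC3339Nano))
	        }
	        return op()
	    }

	    func main() {
	        g := &retryGate{}
	        mount := func() error { return errors.New("failed to sync configmap cache") }

	        if err := g.tryRun(mount); err != nil {
	            g.markFailed(500 * time.Millisecond) // durationBeforeRetry from the log
	            fmt.Println("first attempt:", err)
	        }
	        if err := g.tryRun(mount); err != nil {
	            fmt.Println("immediate retry blocked:", err)
	        }
	        time.Sleep(600 * time.Millisecond) // wait out the gate, then retry
	        fmt.Println("after backoff:", g.tryRun(mount))
	    }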
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-951087 -n multinode-951087
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-951087 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.31s)
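
Aside: PingHostFrom2Pods execs into the busybox pods and pings the host from inside the cluster. A hedged reproduction sketch using os/exec and kubectl follows; the pod name is taken from the controller-manager events above, while host.minikube.internal is the conventional minikube host alias and may differ from the exact target the test uses.

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Pod name from the events logged above; the target host alias is
        // the one minikube conventionally publishes and is assumed here.
        out, err := exec.Command(
            "kubectl", "--context", "multinode-951087",
            "exec", "busybox-67b7f59bb-bbjrl", "--",
            "ping", "-c", "1", "host.minikube.internal",
        ).CombinedOutput()
        if err != nil {
            log.Fatalf("ping from pod failed: %v\n%s", err, out)
        }
        log.Printf("ping succeeded:\n%s", out)
    }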

                                                
                                    
TestRunningBinaryUpgrade (70.76s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.17.0.3920325263.exe start -p running-upgrade-307223 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.17.0.3920325263.exe start -p running-upgrade-307223 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m1.871622775s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-307223 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-307223 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (4.2689941s)

                                                
                                                
-- stdout --
	* [running-upgrade-307223] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-307223 in cluster running-upgrade-307223
	* Pulling base image ...
	* Updating the running docker "running-upgrade-307223" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:27:12.833635  976590 out.go:296] Setting OutFile to fd 1 ...
	I0731 12:27:12.833764  976590 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 12:27:12.833774  976590 out.go:309] Setting ErrFile to fd 2...
	I0731 12:27:12.833780  976590 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 12:27:12.834048  976590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-847174/.minikube/bin
	I0731 12:27:12.834528  976590 out.go:303] Setting JSON to false
	I0731 12:27:12.835861  976590 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":72580,"bootTime":1690733853,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0731 12:27:12.835937  976590 start.go:138] virtualization:  
	I0731 12:27:12.840196  976590 out.go:177] * [running-upgrade-307223] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0731 12:27:12.841947  976590 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 12:27:12.843740  976590 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:27:12.841959  976590 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0731 12:27:12.841994  976590 notify.go:220] Checking for updates...
	I0731 12:27:12.847709  976590 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig
	I0731 12:27:12.849430  976590 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube
	I0731 12:27:12.850980  976590 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 12:27:12.853152  976590 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:27:12.855523  976590 config.go:182] Loaded profile config "running-upgrade-307223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0731 12:27:12.858249  976590 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0731 12:27:12.860094  976590 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 12:27:12.914131  976590 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 12:27:12.914225  976590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 12:27:13.031432  976590 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I0731 12:27:13.057838  976590 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:54 SystemTime:2023-07-31 12:27:13.047215575 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 12:27:13.057954  976590 docker.go:294] overlay module found
	I0731 12:27:13.060558  976590 out.go:177] * Using the docker driver based on existing profile
	I0731 12:27:13.062154  976590 start.go:298] selected driver: docker
	I0731 12:27:13.062175  976590 start.go:898] validating driver "docker" against &{Name:running-upgrade-307223 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-307223 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.24 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 12:27:13.062287  976590 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:27:13.062999  976590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 12:27:13.146943  976590 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:54 SystemTime:2023-07-31 12:27:13.136502352 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 12:27:13.147248  976590 cni.go:84] Creating CNI manager for ""
	I0731 12:27:13.147265  976590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 12:27:13.147279  976590 start_flags.go:319] config:
	{Name:running-upgrade-307223 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-307223 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.24 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 12:27:13.149350  976590 out.go:177] * Starting control plane node running-upgrade-307223 in cluster running-upgrade-307223
	I0731 12:27:13.151138  976590 cache.go:122] Beginning downloading kic base image for docker with crio
	I0731 12:27:13.152601  976590 out.go:177] * Pulling base image ...
	I0731 12:27:13.154216  976590 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0731 12:27:13.154297  976590 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0731 12:27:13.174468  976590 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0731 12:27:13.174491  976590 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0731 12:27:13.229345  976590 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0731 12:27:13.229507  976590 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/running-upgrade-307223/config.json ...
	I0731 12:27:13.229769  976590 cache.go:195] Successfully downloaded all kic artifacts
	I0731 12:27:13.229815  976590 start.go:365] acquiring machines lock for running-upgrade-307223: {Name:mk1ffacc9148ed75ebbe62ded6470a1b20d0891c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:27:13.229875  976590 start.go:369] acquired machines lock for "running-upgrade-307223" in 31.582µs
	I0731 12:27:13.229894  976590 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:27:13.229900  976590 fix.go:54] fixHost starting: 
	I0731 12:27:13.230158  976590 cli_runner.go:164] Run: docker container inspect running-upgrade-307223 --format={{.State.Status}}
	I0731 12:27:13.230432  976590 cache.go:107] acquiring lock: {Name:mkd51221a454e7bc0392003b0a5b9da46fc265f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:27:13.230509  976590 cache.go:115] /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0731 12:27:13.230522  976590 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 92.455µs
	I0731 12:27:13.230531  976590 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0731 12:27:13.230539  976590 cache.go:107] acquiring lock: {Name:mk5fb93024f08d45bee9e1431aaeb4cf2540c7db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:27:13.230573  976590 cache.go:115] /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0731 12:27:13.230578  976590 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 39.885µs
	I0731 12:27:13.230584  976590 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0731 12:27:13.230591  976590 cache.go:107] acquiring lock: {Name:mkd125df3b689d179b9b201b0832947cfd2d60bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:27:13.230630  976590 cache.go:115] /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0731 12:27:13.230639  976590 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 49.149µs
	I0731 12:27:13.230646  976590 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0731 12:27:13.230654  976590 cache.go:107] acquiring lock: {Name:mke882cd5210112ea1849e029dbec868945aa002 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:27:13.230685  976590 cache.go:115] /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0731 12:27:13.230689  976590 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 36.841µs
	I0731 12:27:13.230695  976590 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0731 12:27:13.230705  976590 cache.go:107] acquiring lock: {Name:mkb6b3b7ae10211fe0acf74cf5a7211d1cf0ad48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:27:13.230735  976590 cache.go:115] /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0731 12:27:13.230743  976590 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 39.458µs
	I0731 12:27:13.230749  976590 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0731 12:27:13.230758  976590 cache.go:107] acquiring lock: {Name:mk3c4e5d18e2e899b1c2d6a6181210faac3209f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:27:13.230783  976590 cache.go:115] /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0731 12:27:13.230791  976590 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 34.175µs
	I0731 12:27:13.230797  976590 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0731 12:27:13.230806  976590 cache.go:107] acquiring lock: {Name:mk2a6eace1209cec1cf976e2e67b1b64311c25ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:27:13.230834  976590 cache.go:115] /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0731 12:27:13.230841  976590 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 35.823µs
	I0731 12:27:13.230847  976590 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0731 12:27:13.230871  976590 cache.go:107] acquiring lock: {Name:mkf48b3f07d5cfc0821d729c87bfd8479281fa5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:27:13.230902  976590 cache.go:115] /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0731 12:27:13.230910  976590 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 40.418µs
	I0731 12:27:13.230917  976590 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0731 12:27:13.230922  976590 cache.go:87] Successfully saved all images to host disk.
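
	Aside: the 404 at 12:27:13.229 is expected here: no cri-o preload tarball appears to exist for v1.20.2 on arm64, so minikube falls back to its per-image cache, which is why every cache.go line above finds an existing tarball. Below is a sketch of that probe-then-fall-back shape with net/http; the URL is the one from the log, and the function name is illustrative.

	    package main

	    import (
	        "fmt"
	        "net/http"
	    )

	    // preloadExists probes the tarball URL the log above shows returning 404.
	    func preloadExists(url string) (bool, error) {
	        resp, err := http.Head(url)
	        if err != nil {
	            return false, err
	        }
	        resp.Body.Close()
	        return resp.StatusCode == http.StatusOK, nil
	    }

	    func main() {
	        const url = "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4"
	        ok, err := preloadExists(url)
	        if err != nil {
	            fmt.Println("probe failed:", err)
	            return
	        }
	        if !ok {
	            fmt.Println("no preload for this k8s/runtime/arch; falling back to cached images")
	            return
	        }
	        fmt.Println("preload available")
	    }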
	I0731 12:27:13.249464  976590 fix.go:102] recreateIfNeeded on running-upgrade-307223: state=Running err=<nil>
	W0731 12:27:13.249500  976590 fix.go:128] unexpected machine state, will restart: <nil>
	I0731 12:27:13.253796  976590 out.go:177] * Updating the running docker "running-upgrade-307223" container ...
	I0731 12:27:13.255475  976590 machine.go:88] provisioning docker machine ...
	I0731 12:27:13.255516  976590 ubuntu.go:169] provisioning hostname "running-upgrade-307223"
	I0731 12:27:13.255603  976590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-307223
	I0731 12:27:13.274323  976590 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:13.274818  976590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 36027 <nil> <nil>}
	I0731 12:27:13.274836  976590 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-307223 && echo "running-upgrade-307223" | sudo tee /etc/hostname
	I0731 12:27:13.430830  976590 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-307223
	
	I0731 12:27:13.430945  976590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-307223
	I0731 12:27:13.451439  976590 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:13.451887  976590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 36027 <nil> <nil>}
	I0731 12:27:13.451911  976590 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-307223' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-307223/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-307223' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 12:27:13.598088  976590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 12:27:13.598114  976590 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16968-847174/.minikube CaCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16968-847174/.minikube}
	I0731 12:27:13.598171  976590 ubuntu.go:177] setting up certificates
	I0731 12:27:13.598181  976590 provision.go:83] configureAuth start
	I0731 12:27:13.598271  976590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-307223
	I0731 12:27:13.623347  976590 provision.go:138] copyHostCerts
	I0731 12:27:13.623414  976590 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem, removing ...
	I0731 12:27:13.623443  976590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem
	I0731 12:27:13.623520  976590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem (1082 bytes)
	I0731 12:27:13.623620  976590 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem, removing ...
	I0731 12:27:13.623625  976590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem
	I0731 12:27:13.623655  976590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem (1123 bytes)
	I0731 12:27:13.623706  976590 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem, removing ...
	I0731 12:27:13.623710  976590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem
	I0731 12:27:13.623733  976590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem (1679 bytes)
	I0731 12:27:13.623774  976590 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-307223 san=[192.168.70.24 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-307223]
	I0731 12:27:14.239451  976590 provision.go:172] copyRemoteCerts
	I0731 12:27:14.239533  976590 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 12:27:14.239584  976590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-307223
	I0731 12:27:14.259552  976590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36027 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/running-upgrade-307223/id_rsa Username:docker}
	I0731 12:27:14.367708  976590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 12:27:14.393284  976590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0731 12:27:14.418643  976590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 12:27:14.444780  976590 provision.go:86] duration metric: configureAuth took 846.584948ms
	I0731 12:27:14.444849  976590 ubuntu.go:193] setting minikube options for container-runtime
	I0731 12:27:14.445057  976590 config.go:182] Loaded profile config "running-upgrade-307223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0731 12:27:14.445179  976590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-307223
	I0731 12:27:14.465017  976590 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:14.465482  976590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 36027 <nil> <nil>}
	I0731 12:27:14.465507  976590 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 12:27:15.338260  976590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 12:27:15.338294  976590 machine.go:91] provisioned docker machine in 2.08278565s
	I0731 12:27:15.338305  976590 start.go:300] post-start starting for "running-upgrade-307223" (driver="docker")
	I0731 12:27:15.338315  976590 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 12:27:15.338394  976590 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 12:27:15.338436  976590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-307223
	I0731 12:27:15.369338  976590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36027 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/running-upgrade-307223/id_rsa Username:docker}
	I0731 12:27:15.480658  976590 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 12:27:15.485806  976590 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 12:27:15.485881  976590 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 12:27:15.485907  976590 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 12:27:15.485929  976590 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0731 12:27:15.485969  976590 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-847174/.minikube/addons for local assets ...
	I0731 12:27:15.486054  976590 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-847174/.minikube/files for local assets ...
	I0731 12:27:15.486187  976590 filesync.go:149] local asset: /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem -> 8525502.pem in /etc/ssl/certs
	I0731 12:27:15.486350  976590 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 12:27:15.501205  976590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem --> /etc/ssl/certs/8525502.pem (1708 bytes)
	I0731 12:27:15.535920  976590 start.go:303] post-start completed in 197.599347ms
	I0731 12:27:15.536093  976590 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 12:27:15.536194  976590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-307223
	I0731 12:27:15.576371  976590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36027 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/running-upgrade-307223/id_rsa Username:docker}
	I0731 12:27:15.712933  976590 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 12:27:15.741979  976590 fix.go:56] fixHost completed within 2.512070482s
	I0731 12:27:15.741999  976590 start.go:83] releasing machines lock for "running-upgrade-307223", held for 2.512111606s
	I0731 12:27:15.742082  976590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-307223
	I0731 12:27:15.777669  976590 ssh_runner.go:195] Run: cat /version.json
	I0731 12:27:15.777742  976590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-307223
	I0731 12:27:15.777978  976590 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 12:27:15.778049  976590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-307223
	I0731 12:27:15.841181  976590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36027 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/running-upgrade-307223/id_rsa Username:docker}
	I0731 12:27:15.841471  976590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36027 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/running-upgrade-307223/id_rsa Username:docker}
	W0731 12:27:15.985638  976590 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 12:27:15.985719  976590 ssh_runner.go:195] Run: systemctl --version
	I0731 12:27:16.131344  976590 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 12:27:16.306113  976590 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 12:27:16.311674  976590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 12:27:16.337137  976590 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0731 12:27:16.337220  976590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 12:27:16.370764  976590 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 12:27:16.370791  976590 start.go:466] detecting cgroup driver to use...
	I0731 12:27:16.370826  976590 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0731 12:27:16.370890  976590 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 12:27:16.407674  976590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:27:16.426339  976590 docker.go:196] disabling cri-docker service (if available) ...
	I0731 12:27:16.426414  976590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 12:27:16.438967  976590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 12:27:16.453443  976590 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0731 12:27:16.469295  976590 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0731 12:27:16.469381  976590 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 12:27:16.643888  976590 docker.go:212] disabling docker service ...
	I0731 12:27:16.643965  976590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 12:27:16.658414  976590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 12:27:16.673335  976590 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 12:27:16.822472  976590 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 12:27:16.983297  976590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 12:27:16.997090  976590 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:27:17.018663  976590 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 12:27:17.018785  976590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 12:27:17.033133  976590 out.go:177] 
	W0731 12:27:17.034802  976590 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0731 12:27:17.034822  976590 out.go:239] * 
	W0731 12:27:17.036446  976590 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:27:17.039891  976590 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:144: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-307223 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
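The exit status 90 above is the RUNTIME_ENABLE error earlier in this log: the new binary pipes a sed edit at /etc/crio/crio.conf.d/02-crio.conf, a drop-in file the v1.17.0-era kicbase image does not ship, so sed exits 2 and start aborts. A minimal Go sketch of a more defensive variant that probes candidate cri-o config paths before editing (hypothetical helper, not minikube's actual cruntime code):

package main

import (
	"fmt"
	"os/exec"
)

// setPauseImage rewrites pause_image in the first cri-o config file that
// actually exists, instead of assuming the 02-crio.conf drop-in is present
// (it is missing in the v1.17.0-era kicbase image, hence the sed failure).
// Hypothetical sketch; not the code path shown in the log above.
func setPauseImage(image string) error {
	script := fmt.Sprintf(`for f in /etc/crio/crio.conf.d/02-crio.conf /etc/crio/crio.conf; do
  [ -f "$f" ] && exec sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' "$f"
done
echo "no cri-o config found" >&2; exit 2`, image)
	out, err := exec.Command("sh", "-c", script).CombinedOutput()
	if err != nil {
		return fmt.Errorf("update pause_image: %w: %s", err, out)
	}
	return nil
}

func main() {
	if err := setPauseImage("registry.k8s.io/pause:3.2"); err != nil {
		fmt.Println(err)
	}
}

Under that assumption, the edit would fall through to /etc/crio/crio.conf on the old guest image instead of failing outright.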
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-07-31 12:27:17.066979196 +0000 UTC m=+2385.819084546
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-307223
helpers_test.go:235: (dbg) docker inspect running-upgrade-307223:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5316eeef90bdc8ee96efe4edbd709f9653ab57434f59e8553cea768f3e7064da",
	        "Created": "2023-07-31T12:26:23.640008937Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 973081,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-31T12:26:24.108214954Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/5316eeef90bdc8ee96efe4edbd709f9653ab57434f59e8553cea768f3e7064da/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5316eeef90bdc8ee96efe4edbd709f9653ab57434f59e8553cea768f3e7064da/hostname",
	        "HostsPath": "/var/lib/docker/containers/5316eeef90bdc8ee96efe4edbd709f9653ab57434f59e8553cea768f3e7064da/hosts",
	        "LogPath": "/var/lib/docker/containers/5316eeef90bdc8ee96efe4edbd709f9653ab57434f59e8553cea768f3e7064da/5316eeef90bdc8ee96efe4edbd709f9653ab57434f59e8553cea768f3e7064da-json.log",
	        "Name": "/running-upgrade-307223",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-307223:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-307223",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/eac096e3bd348212d5d2503081c4abd703ba6b70961acaa6516ea6167bdc9c24-init/diff:/var/lib/docker/overlay2/87bcb4f9172bcb90a131e225eb5d2bbc19f47a9398d1baca947676f551b64c13/diff:/var/lib/docker/overlay2/1b0cfe6a16f74c38c93c5cb35262351b2ff99281b3b0ea7667c8fc00ae4e85e6/diff:/var/lib/docker/overlay2/e63c1472daa0332073b1f8579b42c038035c7b49abe05b5db82c77b278d3929a/diff:/var/lib/docker/overlay2/0984f3a3c5fe3450108dbd8d6cb63ad68561c71b47ac047fce1dc5fbab981cb2/diff:/var/lib/docker/overlay2/d831a034cc61e642ad0da86ac6d095acca930a9ea30c249bcbca80ef1129cdc9/diff:/var/lib/docker/overlay2/645731f36595da713fcd4758ab6f72d68409df9dade33794403be1ef087df6c2/diff:/var/lib/docker/overlay2/f29b76e0615f0eea558eb139c941d193dabfaf1785174a559053c6b12692f076/diff:/var/lib/docker/overlay2/5fba54e48adfc079060f3b6ae938c8da28df2c386ecdb04a9eb5defa335bf635/diff:/var/lib/docker/overlay2/4120b858ebc1a01cd7e9229afa7e1c60abc7119d6cf70c7a771381857f4a2416/diff:/var/lib/docker/overlay2/15b004
d9bae2f47cc8be9ec6eaa422de5831428964d997e437f73e5ff661301e/diff:/var/lib/docker/overlay2/3c923932c8ec0871538be8cb220242ff20369e92dde083ee1297bef465be0797/diff:/var/lib/docker/overlay2/83fc09829043cd0bff1cfa6c36ee5b9f4ad7f656bddc72f9ce341566e6aa1a21/diff:/var/lib/docker/overlay2/94ff63f747f6f2c9439222f226e8b487b985258e29ee28bd8b5556bae7e69223/diff:/var/lib/docker/overlay2/4ee373c010da2c828223f7fb266ff10145903de18ba618f1b364841f7ec311de/diff:/var/lib/docker/overlay2/57bdc060b0850c5492ae1b95660b9bbe3802530a530c53c72084810a50aa363e/diff:/var/lib/docker/overlay2/e50994e693691912f1d3a05d6baa35740926c631b7a12a96eb3f0c0221e38b1c/diff:/var/lib/docker/overlay2/94dd1bb14068bae005f82127badb45cfd42c3d8d6f8620026ad74cecbdbafb0f/diff:/var/lib/docker/overlay2/44c167efa5d62339a8a78b21c6d3557f1351a96dff555618d9dc5e53a2c32205/diff:/var/lib/docker/overlay2/2939b97b1cbc88d8d4af6055c796bc57c9ad1516b252ab46727e89a065a1b477/diff:/var/lib/docker/overlay2/d58566dc4c11a0d764f15f6db672fe5ac4bbb6ee4a67e92578152db71039b78c/diff:/var/lib/d
ocker/overlay2/df0a3afc3d8464bed670ccb13d8bd53e780840060c25b43d696158a216f0fcc7/diff:/var/lib/docker/overlay2/efbd596bfc9014f5f26482413e7bffd6275267d8ef3db852c9fa1d124a6103ca/diff:/var/lib/docker/overlay2/2b91666c521259502ddda4a7800ec7538f637ad4145581141686e3f4a3f9b2bf/diff:/var/lib/docker/overlay2/52d623b8780c19280d0f5d1ed87dd8dedce1e09743907994a1fe0afa558ac35f/diff:/var/lib/docker/overlay2/7e24b638b16751bdfa204601961a2eb2215439406a12002b63a39261e22dbff5/diff:/var/lib/docker/overlay2/94ff53d9140692587e1ddcff2356a5616974831842242db3e012c7754c3e3b81/diff:/var/lib/docker/overlay2/a80ce6c7dbe4ce16f8de9629325ddf4cc18636b939be68b64251bf41670e43ee/diff:/var/lib/docker/overlay2/c0babc713ce8066e948b79f3504c927d0ec125374127649553e04deae1a11f0c/diff:/var/lib/docker/overlay2/7b1c982905f19bbcef35cf0b428446289ca145a8638f516ee4af3895567b2e7f/diff:/var/lib/docker/overlay2/d188707592fd52d677fef4793151f67a0ae3dca66efe8575f1261ee50cc5e46d/diff:/var/lib/docker/overlay2/87bc32dec091c2bb4546e9cd675af0f3f5422da178ed6b34e927ae36864
8de32/diff:/var/lib/docker/overlay2/8ac1502f7fd9c06cbc1f3e03a6017f5963103b66571fb14068ac1bc134a99429/diff:/var/lib/docker/overlay2/b6fc22497043613fb00a8a2df31c887e5e6edbb0fde6e134f34bc290d5505230/diff:/var/lib/docker/overlay2/8b0f64c33632e92bf2329c66a405e467fcee818f8f65e87f9c5e92a9e2937c3a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eac096e3bd348212d5d2503081c4abd703ba6b70961acaa6516ea6167bdc9c24/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eac096e3bd348212d5d2503081c4abd703ba6b70961acaa6516ea6167bdc9c24/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eac096e3bd348212d5d2503081c4abd703ba6b70961acaa6516ea6167bdc9c24/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-307223",
	                "Source": "/var/lib/docker/volumes/running-upgrade-307223/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-307223",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-307223",
	                "name.minikube.sigs.k8s.io": "running-upgrade-307223",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2120d8e1689ce67beb33d320479c472e7970b5ca746b1a0a3c567ad2e5073d88",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36027"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36026"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36025"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36024"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2120d8e1689c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-307223": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.24"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5316eeef90bd",
	                        "running-upgrade-307223"
	                    ],
	                    "NetworkID": "e6af1c1ae90d8384001dcce50ea09af9d180dc13f2edc9740c8f0e6eb3cc5bcc",
	                    "EndpointID": "80325e7a9f31c522b7aebee9bd21369f393d10fb878a8759806dc95ceadbf7c5",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.24",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:18",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-307223 -n running-upgrade-307223
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-307223 -n running-upgrade-307223: exit status 4 (496.218309ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 12:27:17.479644  977276 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-307223" does not appear in /home/jenkins/minikube-integration/16968-847174/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-307223" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
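The exit status 4 from minikube status above comes from kubeconfig verification: the profile name is absent from the kubeconfig's cluster map (status.go:415), so no endpoint IP can be extracted and the harness treats the result as "may be ok". A rough sketch of that lookup, assuming k8s.io/client-go is available (illustrative only, not the exact status.go code):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

// endpointFor returns the API server URL recorded for a cluster in the
// given kubeconfig, mirroring the "does not appear in kubeconfig" check
// logged above. Illustrative sketch; names are not minikube's exported API.
func endpointFor(kubeconfig, cluster string) (string, error) {
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		return "", err
	}
	c, ok := cfg.Clusters[cluster]
	if !ok {
		return "", fmt.Errorf("extract IP: %q does not appear in %s", cluster, kubeconfig)
	}
	return c.Server, nil
}

func main() {
	ep, err := endpointFor("/home/jenkins/minikube-integration/16968-847174/kubeconfig", "running-upgrade-307223")
	if err != nil {
		fmt.Println(err) // the stale-kubeconfig path seen in the stderr block above
		return
	}
	fmt.Println(ep)
}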
helpers_test.go:175: Cleaning up "running-upgrade-307223" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-307223
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-307223: (3.078878606s)
--- FAIL: TestRunningBinaryUpgrade (70.76s)

                                                
                                    
TestMissingContainerUpgrade (176.78s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.17.0.3787102022.exe start -p missing-upgrade-141478 --memory=2200 --driver=docker  --container-runtime=crio
E0731 12:22:07.876510  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
version_upgrade_test.go:321: (dbg) Done: /tmp/minikube-v1.17.0.3787102022.exe start -p missing-upgrade-141478 --memory=2200 --driver=docker  --container-runtime=crio: (2m14.926618465s)
version_upgrade_test.go:330: (dbg) Run:  docker stop missing-upgrade-141478
version_upgrade_test.go:330: (dbg) Done: docker stop missing-upgrade-141478: (1.883617406s)
version_upgrade_test.go:335: (dbg) Run:  docker rm missing-upgrade-141478
version_upgrade_test.go:341: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-141478 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:341: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-141478 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (36.606551399s)

                                                
                                                
-- stdout --
	* [missing-upgrade-141478] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-141478 in cluster missing-upgrade-141478
	* Pulling base image ...
	* docker "missing-upgrade-141478" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:23:55.340544  963180 out.go:296] Setting OutFile to fd 1 ...
	I0731 12:23:55.340708  963180 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 12:23:55.340716  963180 out.go:309] Setting ErrFile to fd 2...
	I0731 12:23:55.340722  963180 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 12:23:55.340998  963180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-847174/.minikube/bin
	I0731 12:23:55.341445  963180 out.go:303] Setting JSON to false
	I0731 12:23:55.342741  963180 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":72383,"bootTime":1690733853,"procs":281,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0731 12:23:55.342814  963180 start.go:138] virtualization:  
	I0731 12:23:55.346337  963180 out.go:177] * [missing-upgrade-141478] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0731 12:23:55.348198  963180 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 12:23:55.349809  963180 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:23:55.348300  963180 notify.go:220] Checking for updates...
	I0731 12:23:55.351749  963180 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig
	I0731 12:23:55.353438  963180 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube
	I0731 12:23:55.355217  963180 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 12:23:55.356847  963180 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:23:55.359423  963180 config.go:182] Loaded profile config "missing-upgrade-141478": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0731 12:23:55.361601  963180 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0731 12:23:55.363092  963180 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 12:23:55.392394  963180 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 12:23:55.392489  963180 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 12:23:55.481792  963180 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:53 SystemTime:2023-07-31 12:23:55.47108427 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 12:23:55.481895  963180 docker.go:294] overlay module found
	I0731 12:23:55.483975  963180 out.go:177] * Using the docker driver based on existing profile
	I0731 12:23:55.485698  963180 start.go:298] selected driver: docker
	I0731 12:23:55.485726  963180 start.go:898] validating driver "docker" against &{Name:missing-upgrade-141478 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-141478 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.167 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 12:23:55.486369  963180 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:23:55.487074  963180 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 12:23:55.554130  963180 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:53 SystemTime:2023-07-31 12:23:55.544437589 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 12:23:55.554453  963180 cni.go:84] Creating CNI manager for ""
	I0731 12:23:55.554466  963180 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 12:23:55.554490  963180 start_flags.go:319] config:
	{Name:missing-upgrade-141478 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-141478 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.167 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 12:23:55.557701  963180 out.go:177] * Starting control plane node missing-upgrade-141478 in cluster missing-upgrade-141478
	I0731 12:23:55.559185  963180 cache.go:122] Beginning downloading kic base image for docker with crio
	I0731 12:23:55.560953  963180 out.go:177] * Pulling base image ...
	I0731 12:23:55.562666  963180 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0731 12:23:55.562685  963180 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0731 12:23:55.589645  963180 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I0731 12:23:55.590184  963180 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I0731 12:23:55.590626  963180 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W0731 12:23:55.637656  963180 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0731 12:23:55.637919  963180 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/missing-upgrade-141478/config.json ...
	I0731 12:23:55.638415  963180 cache.go:107] acquiring lock: {Name:mkb6b3b7ae10211fe0acf74cf5a7211d1cf0ad48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:23:55.638415  963180 cache.go:107] acquiring lock: {Name:mkd51221a454e7bc0392003b0a5b9da46fc265f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:23:55.638563  963180 cache.go:115] /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0731 12:23:55.638566  963180 cache.go:107] acquiring lock: {Name:mk3c4e5d18e2e899b1c2d6a6181210faac3209f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:23:55.638589  963180 cache.go:107] acquiring lock: {Name:mk5fb93024f08d45bee9e1431aaeb4cf2540c7db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:23:55.638745  963180 cache.go:107] acquiring lock: {Name:mk2a6eace1209cec1cf976e2e67b1b64311c25ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:23:55.638753  963180 cache.go:107] acquiring lock: {Name:mkd125df3b689d179b9b201b0832947cfd2d60bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:23:55.638968  963180 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 12:23:55.639064  963180 cache.go:107] acquiring lock: {Name:mke882cd5210112ea1849e029dbec868945aa002 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:23:55.639159  963180 cache.go:107] acquiring lock: {Name:mkf48b3f07d5cfc0821d729c87bfd8479281fa5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:23:55.638575  963180 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 168.804µs
	I0731 12:23:55.639377  963180 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0731 12:23:55.639381  963180 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I0731 12:23:55.639542  963180 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I0731 12:23:55.639618  963180 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 12:23:55.640200  963180 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I0731 12:23:55.640816  963180 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 12:23:55.641184  963180 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I0731 12:23:55.641489  963180 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 12:23:55.641779  963180 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I0731 12:23:55.641932  963180 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0731 12:23:55.642346  963180 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 12:23:55.642707  963180 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0731 12:23:55.642596  963180 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I0731 12:23:55.643710  963180 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 12:23:56.064637  963180 cache.go:162] opening:  /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	W0731 12:23:56.068086  963180 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I0731 12:23:56.068254  963180 cache.go:162] opening:  /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	I0731 12:23:56.076231  963180 cache.go:162] opening:  /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	W0731 12:23:56.078408  963180 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I0731 12:23:56.078475  963180 cache.go:162] opening:  /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	W0731 12:23:56.078473  963180 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I0731 12:23:56.078564  963180 cache.go:162] opening:  /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	I0731 12:23:56.101184  963180 cache.go:162] opening:  /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0731 12:23:56.103728  963180 cache.go:162] opening:  /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	I0731 12:23:56.224608  963180 cache.go:157] /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0731 12:23:56.224633  963180 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 586.06846ms
	I0731 12:23:56.224645  963180 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  17.69 KiB / 287.99 MiB [>] 0.01% ? p/s ?
	I0731 12:23:56.430488  963180 cache.go:157] /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0731 12:23:56.430515  963180 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 791.467173ms
	I0731 12:23:56.430529  963180 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  449.34 KiB / 287.99 MiB [] 0.15% ? p/s ?
	I0731 12:23:56.479935  963180 cache.go:157] /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0731 12:23:56.479956  963180 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 840.800929ms
	I0731 12:23:56.479969  963180 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0731 12:23:56.650864  963180 cache.go:157] /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0731 12:23:56.650902  963180 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.012315094s
	I0731 12:23:56.650915  963180 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 43.19 MiB
	I0731 12:23:57.205884  963180 cache.go:157] /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0731 12:23:57.205916  963180 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 1.567647474s
	I0731 12:23:57.205930  963180 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 43.19 MiB
	I0731 12:23:57.291563  963180 cache.go:157] /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0731 12:23:57.291588  963180 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 1.652838882s
	I0731 12:23:57.291602  963180 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  67.79 MiB / 287.99 MiB  23.54% 41.54 MiB
	I0731 12:23:58.404897  963180 cache.go:157] /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0731 12:23:58.404931  963180 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 2.766189217s
	I0731 12:23:58.404945  963180 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0731 12:23:58.404976  963180 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 42.21 M
	I0731 12:24:03.088150  963180 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I0731 12:24:03.088161  963180 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I0731 12:24:04.565152  963180 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I0731 12:24:04.565191  963180 cache.go:195] Successfully downloaded all kic artifacts
	I0731 12:24:04.565240  963180 start.go:365] acquiring machines lock for missing-upgrade-141478: {Name:mk437b4804877d31005923c9505234c24166ce44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:24:04.565303  963180 start.go:369] acquired machines lock for "missing-upgrade-141478" in 39.68µs
	I0731 12:24:04.565327  963180 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:24:04.565336  963180 fix.go:54] fixHost starting: 
	I0731 12:24:04.565594  963180 cli_runner.go:164] Run: docker container inspect missing-upgrade-141478 --format={{.State.Status}}
	W0731 12:24:04.595539  963180 cli_runner.go:211] docker container inspect missing-upgrade-141478 --format={{.State.Status}} returned with exit code 1
	I0731 12:24:04.595596  963180 fix.go:102] recreateIfNeeded on missing-upgrade-141478: state= err=unknown state "missing-upgrade-141478": docker container inspect missing-upgrade-141478 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141478
	I0731 12:24:04.595616  963180 fix.go:107] machineExists: false. err=machine does not exist
	I0731 12:24:04.598573  963180 out.go:177] * docker "missing-upgrade-141478" container is missing, will recreate.
	I0731 12:24:04.600559  963180 delete.go:124] DEMOLISHING missing-upgrade-141478 ...
	I0731 12:24:04.601199  963180 cli_runner.go:164] Run: docker container inspect missing-upgrade-141478 --format={{.State.Status}}
	W0731 12:24:04.643320  963180 cli_runner.go:211] docker container inspect missing-upgrade-141478 --format={{.State.Status}} returned with exit code 1
	W0731 12:24:04.643378  963180 stop.go:75] unable to get state: unknown state "missing-upgrade-141478": docker container inspect missing-upgrade-141478 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141478
	I0731 12:24:04.643399  963180 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-141478": docker container inspect missing-upgrade-141478 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141478
	I0731 12:24:04.643820  963180 cli_runner.go:164] Run: docker container inspect missing-upgrade-141478 --format={{.State.Status}}
	W0731 12:24:04.669737  963180 cli_runner.go:211] docker container inspect missing-upgrade-141478 --format={{.State.Status}} returned with exit code 1
	I0731 12:24:04.669801  963180 delete.go:82] Unable to get host status for missing-upgrade-141478, assuming it has already been deleted: state: unknown state "missing-upgrade-141478": docker container inspect missing-upgrade-141478 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141478
	I0731 12:24:04.669869  963180 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-141478
	W0731 12:24:04.692033  963180 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-141478 returned with exit code 1
	I0731 12:24:04.692068  963180 kic.go:367] could not find the container missing-upgrade-141478 to remove it. will try anyways
	I0731 12:24:04.692186  963180 cli_runner.go:164] Run: docker container inspect missing-upgrade-141478 --format={{.State.Status}}
	W0731 12:24:04.711855  963180 cli_runner.go:211] docker container inspect missing-upgrade-141478 --format={{.State.Status}} returned with exit code 1
	W0731 12:24:04.711923  963180 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-141478": docker container inspect missing-upgrade-141478 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141478
	I0731 12:24:04.711983  963180 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-141478 /bin/bash -c "sudo init 0"
	W0731 12:24:04.740952  963180 cli_runner.go:211] docker exec --privileged -t missing-upgrade-141478 /bin/bash -c "sudo init 0" returned with exit code 1
	I0731 12:24:04.740989  963180 oci.go:647] error shutdown missing-upgrade-141478: docker exec --privileged -t missing-upgrade-141478 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141478
	I0731 12:24:05.741182  963180 cli_runner.go:164] Run: docker container inspect missing-upgrade-141478 --format={{.State.Status}}
	W0731 12:24:05.764452  963180 cli_runner.go:211] docker container inspect missing-upgrade-141478 --format={{.State.Status}} returned with exit code 1
	I0731 12:24:05.764638  963180 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-141478": docker container inspect missing-upgrade-141478 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141478
	I0731 12:24:05.764652  963180 oci.go:661] temporary error: container missing-upgrade-141478 status is  but expect it to be exited
	I0731 12:24:05.764682  963180 retry.go:31] will retry after 475.237959ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-141478": docker container inspect missing-upgrade-141478 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141478
	I0731 12:24:06.240210  963180 cli_runner.go:164] Run: docker container inspect missing-upgrade-141478 --format={{.State.Status}}
	W0731 12:24:06.257951  963180 cli_runner.go:211] docker container inspect missing-upgrade-141478 --format={{.State.Status}} returned with exit code 1
	I0731 12:24:06.258018  963180 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-141478": docker container inspect missing-upgrade-141478 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141478
	I0731 12:24:06.258029  963180 oci.go:661] temporary error: container missing-upgrade-141478 status is  but expect it to be exited
	I0731 12:24:06.258055  963180 retry.go:31] will retry after 413.831388ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-141478": docker container inspect missing-upgrade-141478 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141478
	I0731 12:24:06.672713  963180 cli_runner.go:164] Run: docker container inspect missing-upgrade-141478 --format={{.State.Status}}
	W0731 12:24:06.691023  963180 cli_runner.go:211] docker container inspect missing-upgrade-141478 --format={{.State.Status}} returned with exit code 1
	I0731 12:24:06.691084  963180 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-141478": docker container inspect missing-upgrade-141478 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141478
	I0731 12:24:06.691095  963180 oci.go:661] temporary error: container missing-upgrade-141478 status is  but expect it to be exited
	I0731 12:24:06.691120  963180 retry.go:31] will retry after 595.315458ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-141478": docker container inspect missing-upgrade-141478 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141478
	I0731 12:24:07.286636  963180 cli_runner.go:164] Run: docker container inspect missing-upgrade-141478 --format={{.State.Status}}
	W0731 12:24:07.310550  963180 cli_runner.go:211] docker container inspect missing-upgrade-141478 --format={{.State.Status}} returned with exit code 1
	I0731 12:24:07.310612  963180 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-141478": docker container inspect missing-upgrade-141478 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141478
	I0731 12:24:07.310621  963180 oci.go:661] temporary error: container missing-upgrade-141478 status is  but expect it to be exited
	I0731 12:24:07.310645  963180 retry.go:31] will retry after 1.42364571s: couldn't verify container is exited. %v: unknown state "missing-upgrade-141478": docker container inspect missing-upgrade-141478 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141478
	I0731 12:24:08.735215  963180 cli_runner.go:164] Run: docker container inspect missing-upgrade-141478 --format={{.State.Status}}
	W0731 12:24:08.764160  963180 cli_runner.go:211] docker container inspect missing-upgrade-141478 --format={{.State.Status}} returned with exit code 1
	I0731 12:24:08.764218  963180 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-141478": docker container inspect missing-upgrade-141478 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141478
	I0731 12:24:08.764228  963180 oci.go:661] temporary error: container missing-upgrade-141478 status is  but expect it to be exited
	I0731 12:24:08.764252  963180 retry.go:31] will retry after 3.530987217s: couldn't verify container is exited. %v: unknown state "missing-upgrade-141478": docker container inspect missing-upgrade-141478 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141478
	I0731 12:24:12.296350  963180 cli_runner.go:164] Run: docker container inspect missing-upgrade-141478 --format={{.State.Status}}
	W0731 12:24:12.316963  963180 cli_runner.go:211] docker container inspect missing-upgrade-141478 --format={{.State.Status}} returned with exit code 1
	I0731 12:24:12.317021  963180 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-141478": docker container inspect missing-upgrade-141478 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141478
	I0731 12:24:12.317030  963180 oci.go:661] temporary error: container missing-upgrade-141478 status is  but expect it to be exited
	I0731 12:24:12.317055  963180 retry.go:31] will retry after 4.188927279s: couldn't verify container is exited. %v: unknown state "missing-upgrade-141478": docker container inspect missing-upgrade-141478 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141478
	I0731 12:24:16.506664  963180 cli_runner.go:164] Run: docker container inspect missing-upgrade-141478 --format={{.State.Status}}
	W0731 12:24:16.528312  963180 cli_runner.go:211] docker container inspect missing-upgrade-141478 --format={{.State.Status}} returned with exit code 1
	I0731 12:24:16.528390  963180 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-141478": docker container inspect missing-upgrade-141478 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141478
	I0731 12:24:16.528404  963180 oci.go:661] temporary error: container missing-upgrade-141478 status is  but expect it to be exited
	I0731 12:24:16.528432  963180 retry.go:31] will retry after 4.311695051s: couldn't verify container is exited. %v: unknown state "missing-upgrade-141478": docker container inspect missing-upgrade-141478 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141478
	I0731 12:24:20.840320  963180 cli_runner.go:164] Run: docker container inspect missing-upgrade-141478 --format={{.State.Status}}
	W0731 12:24:20.865889  963180 cli_runner.go:211] docker container inspect missing-upgrade-141478 --format={{.State.Status}} returned with exit code 1
	I0731 12:24:20.865948  963180 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-141478": docker container inspect missing-upgrade-141478 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141478
	I0731 12:24:20.865958  963180 oci.go:661] temporary error: container missing-upgrade-141478 status is  but expect it to be exited
	I0731 12:24:20.865993  963180 oci.go:88] couldn't shut down missing-upgrade-141478 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-141478": docker container inspect missing-upgrade-141478 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141478
	 
	I0731 12:24:20.866055  963180 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-141478
	I0731 12:24:20.888008  963180 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-141478
	W0731 12:24:20.906869  963180 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-141478 returned with exit code 1
	I0731 12:24:20.906955  963180 cli_runner.go:164] Run: docker network inspect missing-upgrade-141478 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 12:24:20.926331  963180 cli_runner.go:164] Run: docker network rm missing-upgrade-141478
	I0731 12:24:21.031039  963180 fix.go:114] Sleeping 1 second for extra luck!
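
The DEMOLISHING phase above is a retry loop with growing, jittered delays (475ms, 414ms, 595ms, 1.4s, 3.5s, 4.2s, 4.3s) that tries to confirm the container has exited before falling back to `docker rm -f -v`. A minimal sketch of that pattern, assuming a simple doubling backoff rather than minikube's actual retry.go policy:

// Editorial sketch, not minikube's retry.go: poll the container status
// with a growing, jittered delay until it reports "exited" or the
// attempts are exhausted.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"strings"
	"time"
)

func waitForExited(name string, attempts int) error {
	delay := 400 * time.Millisecond
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "exited" {
			return nil
		}
		// Grow the wait and add jitter, roughly matching the intervals
		// in the log (475ms up to ~4.3s).
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		delay *= 2
	}
	return fmt.Errorf("couldn't verify container %q is exited", name)
}

func main() {
	// A missing container never reports "exited", so the loop exhausts
	// its retries and the caller force-removes the container anyway,
	// as the log does with `docker rm -f -v`.
	fmt.Println(waitForExited("missing-upgrade-141478", 7))
}
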
	I0731 12:24:22.031193  963180 start.go:125] createHost starting for "" (driver="docker")
	I0731 12:24:22.033683  963180 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0731 12:24:22.033865  963180 start.go:159] libmachine.API.Create for "missing-upgrade-141478" (driver="docker")
	I0731 12:24:22.033882  963180 client.go:168] LocalClient.Create starting
	I0731 12:24:22.033955  963180 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem
	I0731 12:24:22.033995  963180 main.go:141] libmachine: Decoding PEM data...
	I0731 12:24:22.034014  963180 main.go:141] libmachine: Parsing certificate...
	I0731 12:24:22.034072  963180 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem
	I0731 12:24:22.034090  963180 main.go:141] libmachine: Decoding PEM data...
	I0731 12:24:22.034100  963180 main.go:141] libmachine: Parsing certificate...
	I0731 12:24:22.034347  963180 cli_runner.go:164] Run: docker network inspect missing-upgrade-141478 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 12:24:22.053912  963180 cli_runner.go:211] docker network inspect missing-upgrade-141478 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 12:24:22.054001  963180 network_create.go:281] running [docker network inspect missing-upgrade-141478] to gather additional debugging logs...
	I0731 12:24:22.054019  963180 cli_runner.go:164] Run: docker network inspect missing-upgrade-141478
	W0731 12:24:22.076604  963180 cli_runner.go:211] docker network inspect missing-upgrade-141478 returned with exit code 1
	I0731 12:24:22.076632  963180 network_create.go:284] error running [docker network inspect missing-upgrade-141478]: docker network inspect missing-upgrade-141478: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-141478 not found
	I0731 12:24:22.076652  963180 network_create.go:286] output of [docker network inspect missing-upgrade-141478]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-141478 not found
	
	** /stderr **
	I0731 12:24:22.076740  963180 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 12:24:22.104323  963180 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-613e9d6d9aa3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:95:dc:f7:db} reservation:<nil>}
	I0731 12:24:22.104685  963180 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-3cd2f3d254c9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:66:fd:3b:71} reservation:<nil>}
	I0731 12:24:22.105248  963180 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-60a02b2b0e7d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:69:85:1d:a6} reservation:<nil>}
	I0731 12:24:22.105880  963180 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000ede5e0}
	I0731 12:24:22.105940  963180 network_create.go:123] attempt to create docker network missing-upgrade-141478 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0731 12:24:22.106048  963180 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-141478 missing-upgrade-141478
	I0731 12:24:22.213698  963180 network_create.go:107] docker network missing-upgrade-141478 192.168.76.0/24 created
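
The "skipping subnet ... that is taken" lines show the selection logic behind this: candidate private /24s are walked in steps of 9 in the third octet (192.168.49, .58, .67, ...) and the first one not claimed by a local interface wins, here 192.168.76.0/24. A sketch under those assumptions (minikube's network.go also tracks reservations and MTU, which this omits):

// Hypothetical free-subnet scan mirroring the log above; illustrative
// only, not minikube's actual network.go.
package main

import (
	"fmt"
	"net"
)

// subnetTaken reports whether any local interface address falls inside
// the candidate CIDR (e.g. a docker bridge like br-613e9d6d9aa3).
func subnetTaken(cidr string) bool {
	_, want, _ := net.ParseCIDR(cidr)
	addrs, _ := net.InterfaceAddrs()
	for _, a := range addrs {
		if ip, _, err := net.ParseCIDR(a.String()); err == nil && want.Contains(ip) {
			return true
		}
	}
	return false
}

func freeSubnet() (string, error) {
	for third := 49; third <= 166; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !subnetTaken(cidr) {
			return cidr, nil
		}
	}
	return "", fmt.Errorf("no free private /24 found")
}

func main() {
	s, err := freeSubnet()
	fmt.Println(s, err) // e.g. "192.168.76.0/24 <nil>" on the host above
}
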
	I0731 12:24:22.213726  963180 kic.go:117] calculated static IP "192.168.76.2" for the "missing-upgrade-141478" container
	I0731 12:24:22.213798  963180 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 12:24:22.233664  963180 cli_runner.go:164] Run: docker volume create missing-upgrade-141478 --label name.minikube.sigs.k8s.io=missing-upgrade-141478 --label created_by.minikube.sigs.k8s.io=true
	I0731 12:24:22.251500  963180 oci.go:103] Successfully created a docker volume missing-upgrade-141478
	I0731 12:24:22.251590  963180 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-141478-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-141478 --entrypoint /usr/bin/test -v missing-upgrade-141478:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I0731 12:24:23.063543  963180 oci.go:107] Successfully prepared a docker volume missing-upgrade-141478
	I0731 12:24:23.063569  963180 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W0731 12:24:23.063727  963180 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0731 12:24:23.063835  963180 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0731 12:24:23.180229  963180 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-141478 --name missing-upgrade-141478 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-141478 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-141478 --network missing-upgrade-141478 --ip 192.168.76.2 --volume missing-upgrade-141478:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I0731 12:24:23.603551  963180 cli_runner.go:164] Run: docker container inspect missing-upgrade-141478 --format={{.State.Running}}
	I0731 12:24:23.647549  963180 cli_runner.go:164] Run: docker container inspect missing-upgrade-141478 --format={{.State.Status}}
	I0731 12:24:23.683610  963180 cli_runner.go:164] Run: docker exec missing-upgrade-141478 stat /var/lib/dpkg/alternatives/iptables
	I0731 12:24:23.792442  963180 oci.go:144] the created container "missing-upgrade-141478" has a running status.
	I0731 12:24:23.792475  963180 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/missing-upgrade-141478/id_rsa...
	I0731 12:24:24.402815  963180 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16968-847174/.minikube/machines/missing-upgrade-141478/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0731 12:24:24.446601  963180 cli_runner.go:164] Run: docker container inspect missing-upgrade-141478 --format={{.State.Status}}
	I0731 12:24:24.472232  963180 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0731 12:24:24.472252  963180 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-141478 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0731 12:24:24.621152  963180 cli_runner.go:164] Run: docker container inspect missing-upgrade-141478 --format={{.State.Status}}
	I0731 12:24:24.670050  963180 machine.go:88] provisioning docker machine ...
	I0731 12:24:24.670098  963180 ubuntu.go:169] provisioning hostname "missing-upgrade-141478"
	I0731 12:24:24.670164  963180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141478
	I0731 12:24:24.693909  963180 main.go:141] libmachine: Using SSH client type: native
	I0731 12:24:24.694387  963180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 36015 <nil> <nil>}
	I0731 12:24:24.694410  963180 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-141478 && echo "missing-upgrade-141478" | sudo tee /etc/hostname
	I0731 12:24:24.695150  963180 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0731 12:24:27.878172  963180 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-141478
	
	I0731 12:24:27.878310  963180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141478
	I0731 12:24:27.907806  963180 main.go:141] libmachine: Using SSH client type: native
	I0731 12:24:27.908420  963180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 36015 <nil> <nil>}
	I0731 12:24:27.908442  963180 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-141478' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-141478/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-141478' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 12:24:28.053699  963180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 12:24:28.053771  963180 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16968-847174/.minikube CaCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16968-847174/.minikube}
	I0731 12:24:28.053820  963180 ubuntu.go:177] setting up certificates
	I0731 12:24:28.053854  963180 provision.go:83] configureAuth start
	I0731 12:24:28.053960  963180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-141478
	I0731 12:24:28.087013  963180 provision.go:138] copyHostCerts
	I0731 12:24:28.087082  963180 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem, removing ...
	I0731 12:24:28.087092  963180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem
	I0731 12:24:28.087171  963180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem (1082 bytes)
	I0731 12:24:28.087257  963180 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem, removing ...
	I0731 12:24:28.087262  963180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem
	I0731 12:24:28.087293  963180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem (1123 bytes)
	I0731 12:24:28.087353  963180 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem, removing ...
	I0731 12:24:28.087357  963180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem
	I0731 12:24:28.087381  963180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem (1679 bytes)
	I0731 12:24:28.087426  963180 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-141478 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-141478]
	I0731 12:24:28.559175  963180 provision.go:172] copyRemoteCerts
	I0731 12:24:28.559302  963180 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 12:24:28.559372  963180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141478
	I0731 12:24:28.578610  963180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36015 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/missing-upgrade-141478/id_rsa Username:docker}
	I0731 12:24:28.678425  963180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 12:24:28.705211  963180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0731 12:24:28.731585  963180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 12:24:28.757578  963180 provision.go:86] duration metric: configureAuth took 703.689638ms
	I0731 12:24:28.757642  963180 ubuntu.go:193] setting minikube options for container-runtime
	I0731 12:24:28.757862  963180 config.go:182] Loaded profile config "missing-upgrade-141478": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0731 12:24:28.758005  963180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141478
	I0731 12:24:28.787942  963180 main.go:141] libmachine: Using SSH client type: native
	I0731 12:24:28.788426  963180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 36015 <nil> <nil>}
	I0731 12:24:28.788443  963180 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 12:24:29.216556  963180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 12:24:29.216581  963180 machine.go:91] provisioned docker machine in 4.546498169s
	I0731 12:24:29.216600  963180 client.go:171] LocalClient.Create took 7.182712237s
	I0731 12:24:29.216613  963180 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-141478" took 7.182749209s
	I0731 12:24:29.216623  963180 start.go:300] post-start starting for "missing-upgrade-141478" (driver="docker")
	I0731 12:24:29.216634  963180 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 12:24:29.216706  963180 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 12:24:29.216752  963180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141478
	I0731 12:24:29.236238  963180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36015 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/missing-upgrade-141478/id_rsa Username:docker}
	I0731 12:24:29.337477  963180 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 12:24:29.341695  963180 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 12:24:29.341722  963180 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 12:24:29.341733  963180 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 12:24:29.341740  963180 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0731 12:24:29.341749  963180 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-847174/.minikube/addons for local assets ...
	I0731 12:24:29.341807  963180 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-847174/.minikube/files for local assets ...
	I0731 12:24:29.341904  963180 filesync.go:149] local asset: /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem -> 8525502.pem in /etc/ssl/certs
	I0731 12:24:29.342010  963180 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 12:24:29.350881  963180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem --> /etc/ssl/certs/8525502.pem (1708 bytes)
	I0731 12:24:29.375792  963180 start.go:303] post-start completed in 159.152007ms
	I0731 12:24:29.376229  963180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-141478
	I0731 12:24:29.397878  963180 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/missing-upgrade-141478/config.json ...
	I0731 12:24:29.398168  963180 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 12:24:29.398220  963180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141478
	I0731 12:24:29.416484  963180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36015 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/missing-upgrade-141478/id_rsa Username:docker}
	I0731 12:24:29.515096  963180 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 12:24:29.520948  963180 start.go:128] duration metric: createHost completed in 7.489718439s
	I0731 12:24:29.521051  963180 cli_runner.go:164] Run: docker container inspect missing-upgrade-141478 --format={{.State.Status}}
	W0731 12:24:29.545826  963180 fix.go:128] unexpected machine state, will restart: <nil>
	I0731 12:24:29.545857  963180 machine.go:88] provisioning docker machine ...
	I0731 12:24:29.545875  963180 ubuntu.go:169] provisioning hostname "missing-upgrade-141478"
	I0731 12:24:29.545949  963180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141478
	I0731 12:24:29.569532  963180 main.go:141] libmachine: Using SSH client type: native
	I0731 12:24:29.569987  963180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 36015 <nil> <nil>}
	I0731 12:24:29.570009  963180 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-141478 && echo "missing-upgrade-141478" | sudo tee /etc/hostname
	I0731 12:24:29.723572  963180 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-141478
	
	I0731 12:24:29.723665  963180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141478
	I0731 12:24:29.742939  963180 main.go:141] libmachine: Using SSH client type: native
	I0731 12:24:29.743384  963180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 36015 <nil> <nil>}
	I0731 12:24:29.743410  963180 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-141478' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-141478/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-141478' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 12:24:29.885281  963180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 12:24:29.885308  963180 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16968-847174/.minikube CaCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16968-847174/.minikube}
	I0731 12:24:29.885342  963180 ubuntu.go:177] setting up certificates
	I0731 12:24:29.885357  963180 provision.go:83] configureAuth start
	I0731 12:24:29.885426  963180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-141478
	I0731 12:24:29.904078  963180 provision.go:138] copyHostCerts
	I0731 12:24:29.904166  963180 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem, removing ...
	I0731 12:24:29.904175  963180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem
	I0731 12:24:29.904251  963180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem (1082 bytes)
	I0731 12:24:29.904758  963180 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem, removing ...
	I0731 12:24:29.904774  963180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem
	I0731 12:24:29.904820  963180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem (1123 bytes)
	I0731 12:24:29.904943  963180 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem, removing ...
	I0731 12:24:29.904951  963180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem
	I0731 12:24:29.904979  963180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem (1679 bytes)
	I0731 12:24:29.905035  963180 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-141478 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-141478]
	I0731 12:24:30.151761  963180 provision.go:172] copyRemoteCerts
	I0731 12:24:30.151855  963180 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 12:24:30.151927  963180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141478
	I0731 12:24:30.174446  963180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36015 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/missing-upgrade-141478/id_rsa Username:docker}
	I0731 12:24:30.274126  963180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 12:24:30.298062  963180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0731 12:24:30.321592  963180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 12:24:30.346737  963180 provision.go:86] duration metric: configureAuth took 461.364683ms
	I0731 12:24:30.346770  963180 ubuntu.go:193] setting minikube options for container-runtime
	I0731 12:24:30.347008  963180 config.go:182] Loaded profile config "missing-upgrade-141478": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0731 12:24:30.347159  963180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141478
	I0731 12:24:30.365369  963180 main.go:141] libmachine: Using SSH client type: native
	I0731 12:24:30.365813  963180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 36015 <nil> <nil>}
	I0731 12:24:30.365846  963180 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 12:24:30.685939  963180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 12:24:30.685963  963180 machine.go:91] provisioned docker machine in 1.140098486s
	I0731 12:24:30.685974  963180 start.go:300] post-start starting for "missing-upgrade-141478" (driver="docker")
	I0731 12:24:30.685985  963180 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 12:24:30.686054  963180 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 12:24:30.686098  963180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141478
	I0731 12:24:30.705289  963180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36015 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/missing-upgrade-141478/id_rsa Username:docker}
	I0731 12:24:30.809409  963180 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 12:24:30.813542  963180 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 12:24:30.813605  963180 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 12:24:30.813650  963180 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 12:24:30.813657  963180 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0731 12:24:30.813667  963180 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-847174/.minikube/addons for local assets ...
	I0731 12:24:30.813753  963180 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-847174/.minikube/files for local assets ...
	I0731 12:24:30.813835  963180 filesync.go:149] local asset: /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem -> 8525502.pem in /etc/ssl/certs
	I0731 12:24:30.813944  963180 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 12:24:30.822860  963180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem --> /etc/ssl/certs/8525502.pem (1708 bytes)
	I0731 12:24:30.846636  963180 start.go:303] post-start completed in 160.645381ms
	I0731 12:24:30.846716  963180 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 12:24:30.846763  963180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141478
	I0731 12:24:30.865409  963180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36015 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/missing-upgrade-141478/id_rsa Username:docker}
	I0731 12:24:30.962232  963180 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 12:24:30.968859  963180 fix.go:56] fixHost completed within 26.403512927s
	I0731 12:24:30.968883  963180 start.go:83] releasing machines lock for "missing-upgrade-141478", held for 26.403567516s
	I0731 12:24:30.968982  963180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-141478
	I0731 12:24:30.994232  963180 ssh_runner.go:195] Run: cat /version.json
	I0731 12:24:30.994294  963180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141478
	I0731 12:24:30.994555  963180 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 12:24:30.994616  963180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141478
	I0731 12:24:31.021657  963180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36015 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/missing-upgrade-141478/id_rsa Username:docker}
	I0731 12:24:31.024193  963180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36015 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/missing-upgrade-141478/id_rsa Username:docker}
	W0731 12:24:31.117027  963180 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 12:24:31.117148  963180 ssh_runner.go:195] Run: systemctl --version
	I0731 12:24:31.261470  963180 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 12:24:31.350379  963180 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 12:24:31.356270  963180 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 12:24:31.381855  963180 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0731 12:24:31.381982  963180 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 12:24:31.414870  963180 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 12:24:31.414944  963180 start.go:466] detecting cgroup driver to use...
	I0731 12:24:31.414990  963180 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0731 12:24:31.415066  963180 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 12:24:31.444013  963180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:24:31.456670  963180 docker.go:196] disabling cri-docker service (if available) ...
	I0731 12:24:31.456737  963180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 12:24:31.469576  963180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 12:24:31.482088  963180 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0731 12:24:31.496193  963180 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0731 12:24:31.496277  963180 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 12:24:31.599405  963180 docker.go:212] disabling docker service ...
	I0731 12:24:31.599486  963180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 12:24:31.613451  963180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 12:24:31.625520  963180 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 12:24:31.727835  963180 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 12:24:31.839418  963180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 12:24:31.852146  963180 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:24:31.869071  963180 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 12:24:31.869179  963180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 12:24:31.883620  963180 out.go:177] 
	W0731 12:24:31.885552  963180 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0731 12:24:31.885577  963180 out.go:239] * 
	W0731 12:24:31.886481  963180 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:24:31.888404  963180 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:343: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-141478 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:345: *** TestMissingContainerUpgrade FAILED at 2023-07-31 12:24:31.935580872 +0000 UTC m=+2220.687686214
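
The fatal step is visible at 12:24:31.885: minikube tries to point CRI-O at registry.k8s.io/pause:3.2 by sed-editing /etc/crio/crio.conf.d/02-crio.conf, but the kicbase v0.0.17 image this test deliberately resurrects evidently does not ship that drop-in, so sed exits with status 2 ("can't read ... No such file or directory") and start aborts with RUNTIME_ENABLE. A hypothetical defensive variant (not minikube's actual crio.go, and in reality it would run on the node via ssh_runner rather than locally) would probe for the drop-in first:

// Hypothetical guard, for illustration only: fall back to the monolithic
// /etc/crio/crio.conf when the drop-in used by newer kicbase images is
// absent, instead of letting sed exit with status 2.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func setPauseImage(image string) error {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	if _, err := os.Stat(conf); os.IsNotExist(err) {
		// Older images (e.g. kicbase v0.0.17) lack the drop-in directory.
		conf = "/etc/crio/crio.conf"
	}
	sed := fmt.Sprintf(`s|^.*pause_image = .*$|pause_image = "%s"|`, image)
	out, err := exec.Command("sudo", "sed", "-i", sed, conf).CombinedOutput()
	if err != nil {
		return fmt.Errorf("update pause_image in %s: %v: %s", conf, err, out)
	}
	return nil
}

func main() {
	if err := setPauseImage("registry.k8s.io/pause:3.2"); err != nil {
		fmt.Println(err)
	}
}
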
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-141478
helpers_test.go:235: (dbg) docker inspect missing-upgrade-141478:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "66bf38cc9af74bb08e54994c405f07c1152dc84bbf26ea4872f73be63b2a7e9e",
	        "Created": "2023-07-31T12:24:23.219119548Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 965257,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-31T12:24:23.592326371Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/66bf38cc9af74bb08e54994c405f07c1152dc84bbf26ea4872f73be63b2a7e9e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/66bf38cc9af74bb08e54994c405f07c1152dc84bbf26ea4872f73be63b2a7e9e/hostname",
	        "HostsPath": "/var/lib/docker/containers/66bf38cc9af74bb08e54994c405f07c1152dc84bbf26ea4872f73be63b2a7e9e/hosts",
	        "LogPath": "/var/lib/docker/containers/66bf38cc9af74bb08e54994c405f07c1152dc84bbf26ea4872f73be63b2a7e9e/66bf38cc9af74bb08e54994c405f07c1152dc84bbf26ea4872f73be63b2a7e9e-json.log",
	        "Name": "/missing-upgrade-141478",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-141478:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-141478",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/46ef9ccbcbb6ac09de58bf94e18f5c71a902b3c5a46baada7f52556dcbf15a7a-init/diff:/var/lib/docker/overlay2/87bcb4f9172bcb90a131e225eb5d2bbc19f47a9398d1baca947676f551b64c13/diff:/var/lib/docker/overlay2/1b0cfe6a16f74c38c93c5cb35262351b2ff99281b3b0ea7667c8fc00ae4e85e6/diff:/var/lib/docker/overlay2/e63c1472daa0332073b1f8579b42c038035c7b49abe05b5db82c77b278d3929a/diff:/var/lib/docker/overlay2/0984f3a3c5fe3450108dbd8d6cb63ad68561c71b47ac047fce1dc5fbab981cb2/diff:/var/lib/docker/overlay2/d831a034cc61e642ad0da86ac6d095acca930a9ea30c249bcbca80ef1129cdc9/diff:/var/lib/docker/overlay2/645731f36595da713fcd4758ab6f72d68409df9dade33794403be1ef087df6c2/diff:/var/lib/docker/overlay2/f29b76e0615f0eea558eb139c941d193dabfaf1785174a559053c6b12692f076/diff:/var/lib/docker/overlay2/5fba54e48adfc079060f3b6ae938c8da28df2c386ecdb04a9eb5defa335bf635/diff:/var/lib/docker/overlay2/4120b858ebc1a01cd7e9229afa7e1c60abc7119d6cf70c7a771381857f4a2416/diff:/var/lib/docker/overlay2/15b004d9bae2f47cc8be9ec6eaa422de5831428964d997e437f73e5ff661301e/diff:/var/lib/docker/overlay2/3c923932c8ec0871538be8cb220242ff20369e92dde083ee1297bef465be0797/diff:/var/lib/docker/overlay2/83fc09829043cd0bff1cfa6c36ee5b9f4ad7f656bddc72f9ce341566e6aa1a21/diff:/var/lib/docker/overlay2/94ff63f747f6f2c9439222f226e8b487b985258e29ee28bd8b5556bae7e69223/diff:/var/lib/docker/overlay2/4ee373c010da2c828223f7fb266ff10145903de18ba618f1b364841f7ec311de/diff:/var/lib/docker/overlay2/57bdc060b0850c5492ae1b95660b9bbe3802530a530c53c72084810a50aa363e/diff:/var/lib/docker/overlay2/e50994e693691912f1d3a05d6baa35740926c631b7a12a96eb3f0c0221e38b1c/diff:/var/lib/docker/overlay2/94dd1bb14068bae005f82127badb45cfd42c3d8d6f8620026ad74cecbdbafb0f/diff:/var/lib/docker/overlay2/44c167efa5d62339a8a78b21c6d3557f1351a96dff555618d9dc5e53a2c32205/diff:/var/lib/docker/overlay2/2939b97b1cbc88d8d4af6055c796bc57c9ad1516b252ab46727e89a065a1b477/diff:/var/lib/docker/overlay2/d58566dc4c11a0d764f15f6db672fe5ac4bbb6ee4a67e92578152db71039b78c/diff:/var/lib/docker/overlay2/df0a3afc3d8464bed670ccb13d8bd53e780840060c25b43d696158a216f0fcc7/diff:/var/lib/docker/overlay2/efbd596bfc9014f5f26482413e7bffd6275267d8ef3db852c9fa1d124a6103ca/diff:/var/lib/docker/overlay2/2b91666c521259502ddda4a7800ec7538f637ad4145581141686e3f4a3f9b2bf/diff:/var/lib/docker/overlay2/52d623b8780c19280d0f5d1ed87dd8dedce1e09743907994a1fe0afa558ac35f/diff:/var/lib/docker/overlay2/7e24b638b16751bdfa204601961a2eb2215439406a12002b63a39261e22dbff5/diff:/var/lib/docker/overlay2/94ff53d9140692587e1ddcff2356a5616974831842242db3e012c7754c3e3b81/diff:/var/lib/docker/overlay2/a80ce6c7dbe4ce16f8de9629325ddf4cc18636b939be68b64251bf41670e43ee/diff:/var/lib/docker/overlay2/c0babc713ce8066e948b79f3504c927d0ec125374127649553e04deae1a11f0c/diff:/var/lib/docker/overlay2/7b1c982905f19bbcef35cf0b428446289ca145a8638f516ee4af3895567b2e7f/diff:/var/lib/docker/overlay2/d188707592fd52d677fef4793151f67a0ae3dca66efe8575f1261ee50cc5e46d/diff:/var/lib/docker/overlay2/87bc32dec091c2bb4546e9cd675af0f3f5422da178ed6b34e927ae368648de32/diff:/var/lib/docker/overlay2/8ac1502f7fd9c06cbc1f3e03a6017f5963103b66571fb14068ac1bc134a99429/diff:/var/lib/docker/overlay2/b6fc22497043613fb00a8a2df31c887e5e6edbb0fde6e134f34bc290d5505230/diff:/var/lib/docker/overlay2/8b0f64c33632e92bf2329c66a405e467fcee818f8f65e87f9c5e92a9e2937c3a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/46ef9ccbcbb6ac09de58bf94e18f5c71a902b3c5a46baada7f52556dcbf15a7a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/46ef9ccbcbb6ac09de58bf94e18f5c71a902b3c5a46baada7f52556dcbf15a7a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/46ef9ccbcbb6ac09de58bf94e18f5c71a902b3c5a46baada7f52556dcbf15a7a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-141478",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-141478/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-141478",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-141478",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-141478",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1db8cdcca2f08a8b20abebe2bc980d42527baa44120af0c3698e193d37236917",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36015"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36014"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36011"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36013"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36012"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1db8cdcca2f0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-141478": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "66bf38cc9af7",
	                        "missing-upgrade-141478"
	                    ],
	                    "NetworkID": "1c4174ccec2ce26d633a20e86da833cd5e1da4dcb640b1bcd654c608ec4f51ec",
	                    "EndpointID": "0af9488cc1037d11ed69db0916fcace0b431c6c7790ae24954d87be7e62f1746",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
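
The inspect dump above is easier to digest when narrowed to the fields the harness actually checks. The same Go-template style used by the "docker container inspect -f" calls later in this report also works interactively; a minimal sketch against this profile's container, if it is still present (the harness deletes the profile a few lines below):

	# All host-port bindings in one JSON object (cf. NetworkSettings.Ports above)
	docker container inspect -f '{{json .NetworkSettings.Ports}}' missing-upgrade-141478
	# A single mapping, e.g. the forwarded API-server port 8443/tcp
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' missing-upgrade-141478
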
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-141478 -n missing-upgrade-141478
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-141478 -n missing-upgrade-141478: exit status 6 (315.953874ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 12:24:32.257549  966382 status.go:415] kubeconfig endpoint: got: 192.168.59.167:8443, want: 192.168.76.2:8443

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-141478" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
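
The exit status 6 above is the kubeconfig mismatch reported at status.go:415: the kubeconfig still records the endpoint of the old 192.168.59.167 machine, while the recreated kic container answers on 192.168.76.2. The warning's own suggestion is the fix; a sketch of checking and repairing it by hand, using this profile's name:

	# Show the server endpoint currently recorded for this cluster
	kubectl config view -o jsonpath='{.clusters[?(@.name=="missing-upgrade-141478")].cluster.server}'
	# Point the kubeconfig entry back at the running container's endpoint
	out/minikube-linux-arm64 update-context -p missing-upgrade-141478
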
helpers_test.go:175: Cleaning up "missing-upgrade-141478" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-141478
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-141478: (1.897212189s)
--- FAIL: TestMissingContainerUpgrade (176.78s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (91.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.17.0.42493614.exe start -p stopped-upgrade-379049 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0731 12:25:24.299926  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.17.0.42493614.exe start -p stopped-upgrade-379049 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m3.406154838s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.17.0.42493614.exe -p stopped-upgrade-379049 stop
E0731 12:25:44.829430  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.17.0.42493614.exe -p stopped-upgrade-379049 stop: (20.619690903s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-379049 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-379049 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (7.5025881s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-379049] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-379049 in cluster stopped-upgrade-379049
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-379049" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:25:59.376076  970699 out.go:296] Setting OutFile to fd 1 ...
	I0731 12:25:59.376793  970699 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 12:25:59.376823  970699 out.go:309] Setting ErrFile to fd 2...
	I0731 12:25:59.376843  970699 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 12:25:59.377143  970699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-847174/.minikube/bin
	I0731 12:25:59.377561  970699 out.go:303] Setting JSON to false
	I0731 12:25:59.379886  970699 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":72507,"bootTime":1690733853,"procs":282,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0731 12:25:59.379996  970699 start.go:138] virtualization:  
	I0731 12:25:59.382603  970699 out.go:177] * [stopped-upgrade-379049] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0731 12:25:59.384932  970699 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0731 12:25:59.402523  970699 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 12:25:59.399699  970699 notify.go:220] Checking for updates...
	I0731 12:25:59.407149  970699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:25:59.408924  970699 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig
	I0731 12:25:59.411032  970699 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube
	I0731 12:25:59.413233  970699 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 12:25:59.415608  970699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:25:59.418992  970699 config.go:182] Loaded profile config "stopped-upgrade-379049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0731 12:25:59.421148  970699 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0731 12:25:59.423110  970699 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 12:25:59.467921  970699 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 12:25:59.468019  970699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 12:25:59.556772  970699 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I0731 12:25:59.583308  970699 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-07-31 12:25:59.57336431 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 12:25:59.583413  970699 docker.go:294] overlay module found
	I0731 12:25:59.585944  970699 out.go:177] * Using the docker driver based on existing profile
	I0731 12:25:59.587518  970699 start.go:298] selected driver: docker
	I0731 12:25:59.587541  970699 start.go:898] validating driver "docker" against &{Name:stopped-upgrade-379049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-379049 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.170 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 12:25:59.587651  970699 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:25:59.588433  970699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 12:25:59.664508  970699 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-07-31 12:25:59.654529635 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 12:25:59.664824  970699 cni.go:84] Creating CNI manager for ""
	I0731 12:25:59.664842  970699 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 12:25:59.664852  970699 start_flags.go:319] config:
	{Name:stopped-upgrade-379049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-379049 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.170 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 12:25:59.667043  970699 out.go:177] * Starting control plane node stopped-upgrade-379049 in cluster stopped-upgrade-379049
	I0731 12:25:59.668773  970699 cache.go:122] Beginning downloading kic base image for docker with crio
	I0731 12:25:59.670327  970699 out.go:177] * Pulling base image ...
	I0731 12:25:59.671883  970699 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0731 12:25:59.672046  970699 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0731 12:25:59.690443  970699 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0731 12:25:59.690470  970699 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0731 12:25:59.747432  970699 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0731 12:25:59.747600  970699 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/stopped-upgrade-379049/config.json ...
	I0731 12:25:59.747720  970699 cache.go:107] acquiring lock: {Name:mkd51221a454e7bc0392003b0a5b9da46fc265f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:25:59.747803  970699 cache.go:115] /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0731 12:25:59.747812  970699 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 99.463µs
	I0731 12:25:59.747820  970699 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0731 12:25:59.747830  970699 cache.go:107] acquiring lock: {Name:mk5fb93024f08d45bee9e1431aaeb4cf2540c7db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:25:59.747859  970699 cache.go:115] /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0731 12:25:59.747864  970699 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 35.249µs
	I0731 12:25:59.747870  970699 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0731 12:25:59.747871  970699 cache.go:195] Successfully downloaded all kic artifacts
	I0731 12:25:59.747882  970699 cache.go:107] acquiring lock: {Name:mkd125df3b689d179b9b201b0832947cfd2d60bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:25:59.747908  970699 cache.go:115] /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0731 12:25:59.747913  970699 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 31.91µs
	I0731 12:25:59.747919  970699 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0731 12:25:59.747927  970699 cache.go:107] acquiring lock: {Name:mke882cd5210112ea1849e029dbec868945aa002 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:25:59.747958  970699 cache.go:115] /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0731 12:25:59.747963  970699 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 36.447µs
	I0731 12:25:59.747969  970699 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0731 12:25:59.747965  970699 start.go:365] acquiring machines lock for stopped-upgrade-379049: {Name:mkd9b1f413f996601a9bfe64f4d1ddd95ed27e6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:25:59.747977  970699 cache.go:107] acquiring lock: {Name:mkb6b3b7ae10211fe0acf74cf5a7211d1cf0ad48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:25:59.748003  970699 cache.go:115] /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0731 12:25:59.748008  970699 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 31.294µs
	I0731 12:25:59.748008  970699 start.go:369] acquired machines lock for "stopped-upgrade-379049" in 27.282µs
	I0731 12:25:59.748019  970699 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0731 12:25:59.748029  970699 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:25:59.748028  970699 cache.go:107] acquiring lock: {Name:mk3c4e5d18e2e899b1c2d6a6181210faac3209f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:25:59.748038  970699 fix.go:54] fixHost starting: 
	I0731 12:25:59.748054  970699 cache.go:115] /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0731 12:25:59.748059  970699 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 31.975µs
	I0731 12:25:59.748065  970699 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0731 12:25:59.748085  970699 cache.go:107] acquiring lock: {Name:mk2a6eace1209cec1cf976e2e67b1b64311c25ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:25:59.748198  970699 cache.go:115] /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0731 12:25:59.748208  970699 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 127.188µs
	I0731 12:25:59.748216  970699 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0731 12:25:59.748228  970699 cache.go:107] acquiring lock: {Name:mkf48b3f07d5cfc0821d729c87bfd8479281fa5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:25:59.748271  970699 cache.go:115] /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0731 12:25:59.748277  970699 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 51.028µs
	I0731 12:25:59.748283  970699 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0731 12:25:59.748288  970699 cache.go:87] Successfully saved all images to host disk.
	I0731 12:25:59.748394  970699 cli_runner.go:164] Run: docker container inspect stopped-upgrade-379049 --format={{.State.Status}}
	I0731 12:25:59.766587  970699 fix.go:102] recreateIfNeeded on stopped-upgrade-379049: state=Stopped err=<nil>
	W0731 12:25:59.766613  970699 fix.go:128] unexpected machine state, will restart: <nil>
	I0731 12:25:59.768704  970699 out.go:177] * Restarting existing docker container for "stopped-upgrade-379049" ...
	I0731 12:25:59.770455  970699 cli_runner.go:164] Run: docker start stopped-upgrade-379049
	I0731 12:26:00.220432  970699 cli_runner.go:164] Run: docker container inspect stopped-upgrade-379049 --format={{.State.Status}}
	I0731 12:26:00.251306  970699 kic.go:426] container "stopped-upgrade-379049" state is running.
	I0731 12:26:00.251780  970699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-379049
	I0731 12:26:00.277132  970699 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/stopped-upgrade-379049/config.json ...
	I0731 12:26:00.277418  970699 machine.go:88] provisioning docker machine ...
	I0731 12:26:00.277455  970699 ubuntu.go:169] provisioning hostname "stopped-upgrade-379049"
	I0731 12:26:00.277527  970699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-379049
	I0731 12:26:00.312087  970699 main.go:141] libmachine: Using SSH client type: native
	I0731 12:26:00.312613  970699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 36023 <nil> <nil>}
	I0731 12:26:00.312633  970699 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-379049 && echo "stopped-upgrade-379049" | sudo tee /etc/hostname
	I0731 12:26:00.313664  970699 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0731 12:26:03.478242  970699 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-379049
	
	I0731 12:26:03.478340  970699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-379049
	I0731 12:26:03.499105  970699 main.go:141] libmachine: Using SSH client type: native
	I0731 12:26:03.499553  970699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 36023 <nil> <nil>}
	I0731 12:26:03.499581  970699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-379049' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-379049/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-379049' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 12:26:03.649468  970699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 12:26:03.649495  970699 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16968-847174/.minikube CaCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16968-847174/.minikube}
	I0731 12:26:03.649516  970699 ubuntu.go:177] setting up certificates
	I0731 12:26:03.649524  970699 provision.go:83] configureAuth start
	I0731 12:26:03.649588  970699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-379049
	I0731 12:26:03.669062  970699 provision.go:138] copyHostCerts
	I0731 12:26:03.669135  970699 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem, removing ...
	I0731 12:26:03.669148  970699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem
	I0731 12:26:03.669235  970699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem (1082 bytes)
	I0731 12:26:03.669342  970699 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem, removing ...
	I0731 12:26:03.669351  970699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem
	I0731 12:26:03.669382  970699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem (1123 bytes)
	I0731 12:26:03.669439  970699 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem, removing ...
	I0731 12:26:03.669447  970699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem
	I0731 12:26:03.669471  970699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem (1679 bytes)
	I0731 12:26:03.669527  970699 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-379049 san=[192.168.59.170 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-379049]
	I0731 12:26:04.765628  970699 provision.go:172] copyRemoteCerts
	I0731 12:26:04.765705  970699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 12:26:04.765756  970699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-379049
	I0731 12:26:04.784568  970699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36023 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/stopped-upgrade-379049/id_rsa Username:docker}
	I0731 12:26:04.886045  970699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 12:26:04.910628  970699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0731 12:26:04.933939  970699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 12:26:04.957371  970699 provision.go:86] duration metric: configureAuth took 1.307832692s
	I0731 12:26:04.957397  970699 ubuntu.go:193] setting minikube options for container-runtime
	I0731 12:26:04.957590  970699 config.go:182] Loaded profile config "stopped-upgrade-379049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0731 12:26:04.957698  970699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-379049
	I0731 12:26:04.978002  970699 main.go:141] libmachine: Using SSH client type: native
	I0731 12:26:04.978446  970699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 36023 <nil> <nil>}
	I0731 12:26:04.978466  970699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 12:26:05.432360  970699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 12:26:05.432384  970699 machine.go:91] provisioned docker machine in 5.154940254s
	I0731 12:26:05.432394  970699 start.go:300] post-start starting for "stopped-upgrade-379049" (driver="docker")
	I0731 12:26:05.432407  970699 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 12:26:05.432487  970699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 12:26:05.432533  970699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-379049
	I0731 12:26:05.463947  970699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36023 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/stopped-upgrade-379049/id_rsa Username:docker}
	I0731 12:26:05.572485  970699 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 12:26:05.577127  970699 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 12:26:05.577197  970699 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 12:26:05.577223  970699 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 12:26:05.577254  970699 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0731 12:26:05.577282  970699 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-847174/.minikube/addons for local assets ...
	I0731 12:26:05.577360  970699 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-847174/.minikube/files for local assets ...
	I0731 12:26:05.577466  970699 filesync.go:149] local asset: /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem -> 8525502.pem in /etc/ssl/certs
	I0731 12:26:05.577605  970699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 12:26:05.587772  970699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem --> /etc/ssl/certs/8525502.pem (1708 bytes)
	I0731 12:26:05.615291  970699 start.go:303] post-start completed in 182.88158ms
	I0731 12:26:05.615424  970699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 12:26:05.615490  970699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-379049
	I0731 12:26:05.635049  970699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36023 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/stopped-upgrade-379049/id_rsa Username:docker}
	I0731 12:26:05.736020  970699 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 12:26:05.743448  970699 fix.go:56] fixHost completed within 5.99540464s
	I0731 12:26:05.743476  970699 start.go:83] releasing machines lock for "stopped-upgrade-379049", held for 5.995452148s
	I0731 12:26:05.743552  970699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-379049
	I0731 12:26:05.767779  970699 ssh_runner.go:195] Run: cat /version.json
	I0731 12:26:05.767833  970699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-379049
	I0731 12:26:05.768144  970699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 12:26:05.768194  970699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-379049
	I0731 12:26:05.796314  970699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36023 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/stopped-upgrade-379049/id_rsa Username:docker}
	I0731 12:26:05.818724  970699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36023 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/stopped-upgrade-379049/id_rsa Username:docker}
	W0731 12:26:05.909846  970699 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 12:26:05.909933  970699 ssh_runner.go:195] Run: systemctl --version
	I0731 12:26:05.998691  970699 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 12:26:06.118678  970699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 12:26:06.126842  970699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 12:26:06.155669  970699 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0731 12:26:06.155828  970699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 12:26:06.207904  970699 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 12:26:06.207972  970699 start.go:466] detecting cgroup driver to use...
	I0731 12:26:06.208016  970699 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0731 12:26:06.208095  970699 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 12:26:06.256883  970699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:26:06.279344  970699 docker.go:196] disabling cri-docker service (if available) ...
	I0731 12:26:06.279411  970699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 12:26:06.294242  970699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 12:26:06.308566  970699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0731 12:26:06.322896  970699 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0731 12:26:06.322965  970699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 12:26:06.483642  970699 docker.go:212] disabling docker service ...
	I0731 12:26:06.483760  970699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 12:26:06.499088  970699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 12:26:06.511922  970699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 12:26:06.619841  970699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 12:26:06.733302  970699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 12:26:06.747523  970699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:26:06.765949  970699 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 12:26:06.766077  970699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 12:26:06.779809  970699 out.go:177] 
	W0731 12:26:06.782198  970699 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0731 12:26:06.782220  970699 out.go:239] * 
	W0731 12:26:06.783229  970699 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:26:06.785261  970699 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:212: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-379049 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (91.53s)
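
The root cause is in the last few log lines: the new binary rewrites pause_image via sed in /etc/crio/crio.conf.d/02-crio.conf, but the v0.0.17 kicbase image provisioned by minikube v1.17.0 predates that drop-in layout, so the file does not exist and start aborts with RUNTIME_ENABLE. A guarded variant of the same sed illustrates the shape of a fix (a sketch only; the fallback to the single-file /etc/crio/crio.conf is an assumption about the older image, not something this log confirms):

	# Prefer the drop-in used by newer kicbase images ...
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	# ... and fall back to the assumed legacy single-file location if it is absent
	[ -f "$CONF" ] || CONF=/etc/crio/crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
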

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (53.18s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-267284 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-267284 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (45.267746234s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-267284] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node pause-267284 in cluster pause-267284
	* Pulling base image ...
	* Updating the running docker "pause-267284" container ...
	* Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-267284" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
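
Nothing in the output above is itself an error; the failure is that the marker line the test expects never appears, meaning the second start reconfigured a cluster it should have recognized as already set up. The assertion is easy to replay outside the harness (a sketch; grep exits non-zero when the line is missing, which is exactly the failing condition at pause_test.go:100):

	out/minikube-linux-arm64 start -p pause-267284 --alsologtostderr -v=1 --driver=docker --container-runtime=crio 2>&1 \
	  | grep -F "The running cluster does not require reconfiguration"
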
** stderr ** 
	I0731 12:28:44.803130  983133 out.go:296] Setting OutFile to fd 1 ...
	I0731 12:28:44.803421  983133 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 12:28:44.803454  983133 out.go:309] Setting ErrFile to fd 2...
	I0731 12:28:44.803486  983133 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 12:28:44.803833  983133 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-847174/.minikube/bin
	I0731 12:28:44.804424  983133 out.go:303] Setting JSON to false
	I0731 12:28:44.805800  983133 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":72672,"bootTime":1690733853,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0731 12:28:44.805915  983133 start.go:138] virtualization:  
	I0731 12:28:44.808934  983133 out.go:177] * [pause-267284] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0731 12:28:44.811766  983133 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 12:28:44.811871  983133 notify.go:220] Checking for updates...
	I0731 12:28:44.815829  983133 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:28:44.818112  983133 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig
	I0731 12:28:44.819929  983133 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube
	I0731 12:28:44.821997  983133 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 12:28:44.823819  983133 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:28:44.826068  983133 config.go:182] Loaded profile config "pause-267284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 12:28:44.826663  983133 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 12:28:44.855954  983133 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 12:28:44.856046  983133 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 12:28:45.029702  983133 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:55 SystemTime:2023-07-31 12:28:45.016291054 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 12:28:45.029833  983133 docker.go:294] overlay module found
	I0731 12:28:45.035538  983133 out.go:177] * Using the docker driver based on existing profile
	I0731 12:28:45.037970  983133 start.go:298] selected driver: docker
	I0731 12:28:45.037996  983133 start.go:898] validating driver "docker" against &{Name:pause-267284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:pause-267284 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 12:28:45.038173  983133 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:28:45.038339  983133 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 12:28:45.301846  983133 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:55 SystemTime:2023-07-31 12:28:45.286839438 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 12:28:45.302514  983133 cni.go:84] Creating CNI manager for ""
	I0731 12:28:45.302532  983133 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 12:28:45.302542  983133 start_flags.go:319] config:
	{Name:pause-267284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:pause-267284 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 12:28:45.304979  983133 out.go:177] * Starting control plane node pause-267284 in cluster pause-267284
	I0731 12:28:45.306931  983133 cache.go:122] Beginning downloading kic base image for docker with crio
	I0731 12:28:45.308609  983133 out.go:177] * Pulling base image ...
	I0731 12:28:45.311207  983133 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 12:28:45.311281  983133 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4
	I0731 12:28:45.311299  983133 cache.go:57] Caching tarball of preloaded images
	I0731 12:28:45.311348  983133 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0731 12:28:45.311611  983133 preload.go:174] Found /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0731 12:28:45.311636  983133 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0731 12:28:45.311809  983133 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/pause-267284/config.json ...
	I0731 12:28:45.357732  983133 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0731 12:28:45.357765  983133 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0731 12:28:45.357788  983133 cache.go:195] Successfully downloaded all kic artifacts
	I0731 12:28:45.357904  983133 start.go:365] acquiring machines lock for pause-267284: {Name:mk26e17dfeb733154750576c4d7262cfdac1c1dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:28:45.358035  983133 start.go:369] acquired machines lock for "pause-267284" in 48.328µs
	I0731 12:28:45.358061  983133 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:28:45.358070  983133 fix.go:54] fixHost starting: 
	I0731 12:28:45.358453  983133 cli_runner.go:164] Run: docker container inspect pause-267284 --format={{.State.Status}}
	I0731 12:28:45.394100  983133 fix.go:102] recreateIfNeeded on pause-267284: state=Running err=<nil>
	W0731 12:28:45.394133  983133 fix.go:128] unexpected machine state, will restart: <nil>
	I0731 12:28:45.396806  983133 out.go:177] * Updating the running docker "pause-267284" container ...
	I0731 12:28:45.398449  983133 machine.go:88] provisioning docker machine ...
	I0731 12:28:45.398563  983133 ubuntu.go:169] provisioning hostname "pause-267284"
	I0731 12:28:45.398781  983133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-267284
	I0731 12:28:45.430325  983133 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:45.430792  983133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 36032 <nil> <nil>}
	I0731 12:28:45.430805  983133 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-267284 && echo "pause-267284" | sudo tee /etc/hostname
	I0731 12:28:45.605718  983133 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-267284
	
	I0731 12:28:45.605812  983133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-267284
	I0731 12:28:45.629148  983133 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:45.629586  983133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 36032 <nil> <nil>}
	I0731 12:28:45.629610  983133 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-267284' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-267284/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-267284' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 12:28:45.785829  983133 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 12:28:45.785853  983133 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16968-847174/.minikube CaCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16968-847174/.minikube}
	I0731 12:28:45.785876  983133 ubuntu.go:177] setting up certificates
	I0731 12:28:45.785885  983133 provision.go:83] configureAuth start
	I0731 12:28:45.785945  983133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-267284
	I0731 12:28:45.807148  983133 provision.go:138] copyHostCerts
	I0731 12:28:45.807229  983133 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem, removing ...
	I0731 12:28:45.807261  983133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem
	I0731 12:28:45.807400  983133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem (1123 bytes)
	I0731 12:28:45.807544  983133 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem, removing ...
	I0731 12:28:45.807557  983133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem
	I0731 12:28:45.807601  983133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem (1679 bytes)
	I0731 12:28:45.807686  983133 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem, removing ...
	I0731 12:28:45.807696  983133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem
	I0731 12:28:45.807738  983133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem (1082 bytes)
	I0731 12:28:45.807798  983133 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca-key.pem org=jenkins.pause-267284 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube pause-267284]
	I0731 12:28:46.292483  983133 provision.go:172] copyRemoteCerts
	I0731 12:28:46.292573  983133 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 12:28:46.292617  983133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-267284
	I0731 12:28:46.314722  983133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36032 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/pause-267284/id_rsa Username:docker}
	I0731 12:28:46.412627  983133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 12:28:46.456924  983133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 12:28:46.491613  983133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0731 12:28:46.536297  983133 provision.go:86] duration metric: configureAuth took 750.398174ms
	I0731 12:28:46.536332  983133 ubuntu.go:193] setting minikube options for container-runtime
	I0731 12:28:46.536590  983133 config.go:182] Loaded profile config "pause-267284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 12:28:46.536720  983133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-267284
	I0731 12:28:46.561488  983133 main.go:141] libmachine: Using SSH client type: native
	I0731 12:28:46.561981  983133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 36032 <nil> <nil>}
	I0731 12:28:46.562021  983133 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 12:28:52.349893  983133 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 12:28:52.349912  983133 machine.go:91] provisioned docker machine in 6.951436918s
	I0731 12:28:52.349922  983133 start.go:300] post-start starting for "pause-267284" (driver="docker")
	I0731 12:28:52.349932  983133 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 12:28:52.349995  983133 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 12:28:52.350034  983133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-267284
	I0731 12:28:52.396496  983133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36032 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/pause-267284/id_rsa Username:docker}
	I0731 12:28:52.533259  983133 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 12:28:52.539551  983133 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 12:28:52.539585  983133 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 12:28:52.539598  983133 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 12:28:52.539605  983133 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0731 12:28:52.539615  983133 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-847174/.minikube/addons for local assets ...
	I0731 12:28:52.539698  983133 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-847174/.minikube/files for local assets ...
	I0731 12:28:52.539784  983133 filesync.go:149] local asset: /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem -> 8525502.pem in /etc/ssl/certs
	I0731 12:28:52.539891  983133 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 12:28:52.561139  983133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem --> /etc/ssl/certs/8525502.pem (1708 bytes)
	I0731 12:28:52.621998  983133 start.go:303] post-start completed in 272.050285ms
	I0731 12:28:52.622132  983133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 12:28:52.622198  983133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-267284
	I0731 12:28:52.662675  983133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36032 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/pause-267284/id_rsa Username:docker}
	I0731 12:28:52.776774  983133 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 12:28:52.793731  983133 fix.go:56] fixHost completed within 7.435652565s
	I0731 12:28:52.793751  983133 start.go:83] releasing machines lock for "pause-267284", held for 7.435703987s
	I0731 12:28:52.793819  983133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-267284
	I0731 12:28:52.830706  983133 ssh_runner.go:195] Run: cat /version.json
	I0731 12:28:52.830758  983133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-267284
	I0731 12:28:52.830989  983133 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 12:28:52.831050  983133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-267284
	I0731 12:28:52.892188  983133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36032 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/pause-267284/id_rsa Username:docker}
	I0731 12:28:52.894625  983133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36032 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/pause-267284/id_rsa Username:docker}
	I0731 12:28:53.156252  983133 ssh_runner.go:195] Run: systemctl --version
	I0731 12:28:53.163387  983133 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 12:28:53.336970  983133 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 12:28:53.343777  983133 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 12:28:53.356414  983133 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0731 12:28:53.356556  983133 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 12:28:53.368340  983133 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 12:28:53.368413  983133 start.go:466] detecting cgroup driver to use...
	I0731 12:28:53.368458  983133 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0731 12:28:53.368543  983133 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 12:28:53.385907  983133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:28:53.401913  983133 docker.go:196] disabling cri-docker service (if available) ...
	I0731 12:28:53.402024  983133 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 12:28:53.420055  983133 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 12:28:53.436687  983133 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 12:28:53.602115  983133 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 12:28:53.804516  983133 docker.go:212] disabling docker service ...
	I0731 12:28:53.804589  983133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 12:28:53.828998  983133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 12:28:53.849972  983133 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 12:28:54.027039  983133 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 12:28:54.272130  983133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 12:28:54.327140  983133 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:28:54.519487  983133 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 12:28:54.519557  983133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 12:28:54.602277  983133 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 12:28:54.602359  983133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 12:28:54.699651  983133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 12:28:54.770900  983133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 12:28:54.889258  983133 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 12:28:54.953603  983133 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 12:28:54.991596  983133 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 12:28:55.030168  983133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:28:55.318183  983133 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 12:29:05.657562  983133 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.339346822s)
	I0731 12:29:05.657588  983133 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 12:29:05.657643  983133 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 12:29:05.662960  983133 start.go:534] Will wait 60s for crictl version
	I0731 12:29:05.663025  983133 ssh_runner.go:195] Run: which crictl
	I0731 12:29:05.669089  983133 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 12:29:05.729573  983133 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0731 12:29:05.729664  983133 ssh_runner.go:195] Run: crio --version
	I0731 12:29:05.787577  983133 ssh_runner.go:195] Run: crio --version
	I0731 12:29:05.851240  983133 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0731 12:29:05.853122  983133 cli_runner.go:164] Run: docker network inspect pause-267284 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 12:29:05.872530  983133 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0731 12:29:05.878353  983133 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 12:29:05.878427  983133 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 12:29:05.946249  983133 crio.go:496] all images are preloaded for cri-o runtime.
	I0731 12:29:05.946268  983133 crio.go:415] Images already preloaded, skipping extraction
	I0731 12:29:05.946334  983133 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 12:29:06.014910  983133 crio.go:496] all images are preloaded for cri-o runtime.
	I0731 12:29:06.014993  983133 cache_images.go:84] Images are preloaded, skipping loading
	I0731 12:29:06.015120  983133 ssh_runner.go:195] Run: crio config
	I0731 12:29:06.126521  983133 cni.go:84] Creating CNI manager for ""
	I0731 12:29:06.126589  983133 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 12:29:06.126615  983133 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0731 12:29:06.126662  983133 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-267284 NodeName:pause-267284 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 12:29:06.126904  983133 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-267284"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 12:29:06.127029  983133 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-267284 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:pause-267284 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0731 12:29:06.127133  983133 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0731 12:29:06.139769  983133 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 12:29:06.139915  983133 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 12:29:06.152073  983133 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I0731 12:29:06.174623  983133 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 12:29:06.214636  983133 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I0731 12:29:06.245592  983133 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0731 12:29:06.251294  983133 certs.go:56] Setting up /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/pause-267284 for IP: 192.168.76.2
	I0731 12:29:06.251371  983133 certs.go:190] acquiring lock for shared ca certs: {Name:mk762e840a818dea6b5e9edfaa8822eb28411d63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:29:06.251549  983133 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16968-847174/.minikube/ca.key
	I0731 12:29:06.251625  983133 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16968-847174/.minikube/proxy-client-ca.key
	I0731 12:29:06.251751  983133 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/pause-267284/client.key
	I0731 12:29:06.251837  983133 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/pause-267284/apiserver.key.31bdca25
	I0731 12:29:06.251913  983133 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/pause-267284/proxy-client.key
	I0731 12:29:06.252080  983133 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/852550.pem (1338 bytes)
	W0731 12:29:06.252153  983133 certs.go:433] ignoring /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/852550_empty.pem, impossibly tiny 0 bytes
	I0731 12:29:06.252179  983133 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 12:29:06.252246  983133 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem (1082 bytes)
	I0731 12:29:06.252304  983133 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem (1123 bytes)
	I0731 12:29:06.252368  983133 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem (1679 bytes)
	I0731 12:29:06.252453  983133 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem (1708 bytes)
	I0731 12:29:06.253416  983133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/pause-267284/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0731 12:29:06.284279  983133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/pause-267284/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 12:29:06.314963  983133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/pause-267284/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 12:29:06.346415  983133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/pause-267284/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 12:29:06.377139  983133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 12:29:06.408416  983133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 12:29:06.440189  983133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 12:29:06.473090  983133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 12:29:06.504528  983133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 12:29:06.537243  983133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/certs/852550.pem --> /usr/share/ca-certificates/852550.pem (1338 bytes)
	I0731 12:29:06.569409  983133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem --> /usr/share/ca-certificates/8525502.pem (1708 bytes)
	I0731 12:29:06.602854  983133 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 12:29:06.626552  983133 ssh_runner.go:195] Run: openssl version
	I0731 12:29:06.634875  983133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 12:29:06.648616  983133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:29:06.654547  983133 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 31 11:48 /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:29:06.654668  983133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:29:06.664814  983133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 12:29:06.676492  983133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/852550.pem && ln -fs /usr/share/ca-certificates/852550.pem /etc/ssl/certs/852550.pem"
	I0731 12:29:06.689367  983133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/852550.pem
	I0731 12:29:06.695627  983133 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 31 11:54 /usr/share/ca-certificates/852550.pem
	I0731 12:29:06.695770  983133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/852550.pem
	I0731 12:29:06.705774  983133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/852550.pem /etc/ssl/certs/51391683.0"
	I0731 12:29:06.718708  983133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8525502.pem && ln -fs /usr/share/ca-certificates/8525502.pem /etc/ssl/certs/8525502.pem"
	I0731 12:29:06.731323  983133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8525502.pem
	I0731 12:29:06.737317  983133 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 31 11:54 /usr/share/ca-certificates/8525502.pem
	I0731 12:29:06.737464  983133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8525502.pem
	I0731 12:29:06.747410  983133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8525502.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 12:29:06.759017  983133 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0731 12:29:06.764585  983133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 12:29:06.774244  983133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 12:29:06.786466  983133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 12:29:06.796530  983133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 12:29:06.806117  983133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 12:29:06.815767  983133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 12:29:06.825305  983133 kubeadm.go:404] StartCluster: {Name:pause-267284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:pause-267284 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 12:29:06.825474  983133 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 12:29:06.825564  983133 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 12:29:06.899208  983133 cri.go:89] found id: "1c0a7486e608f7b790b4999d77a0c25c4b8076e51257be94ed971399e71b1db1"
	I0731 12:29:06.899312  983133 cri.go:89] found id: "4158889d359bff3498f692d0bee4770d0525d9bb09a6ec5516f0d47e7262038f"
	I0731 12:29:06.899335  983133 cri.go:89] found id: "942ea5c65e2504aa623ea11a7f94df96dd4ff81a9805adc34d9ed32fdf7b5b13"
	I0731 12:29:06.899352  983133 cri.go:89] found id: "966fcf83fa1b3f2df7e1f11ee73cf7853d74cff7ef5f8c0cbd1dd0646493eaa0"
	I0731 12:29:06.899383  983133 cri.go:89] found id: "d9b25e90709fdc96747443a2fadfdccd536697833e7746ee6cb43c02625062ed"
	I0731 12:29:06.899406  983133 cri.go:89] found id: "74ae9c2ecc8dc4bf134f91af61ff4ec9db0e63f8935348a4ef488b7464017f2c"
	I0731 12:29:06.899422  983133 cri.go:89] found id: "373973d9678f5a505c8e98a0e030c0ef9f15d1ddf0a52c1a34a60ade3775c82a"
	I0731 12:29:06.899442  983133 cri.go:89] found id: "457558821ba8b6422ab65c71240b616245289bd44aaf6202d0c8d1a2c3d8a8ba"
	I0731 12:29:06.899472  983133 cri.go:89] found id: "95b3b3b886706fab1cf3eb5668222a60201a3bd4b2600617e9e5a221afa13dc1"
	I0731 12:29:06.899499  983133 cri.go:89] found id: "a918934b5413b7a1d28cede2fe63ce8587090c1ab3cfac8242a02bd5f9f6aecd"
	I0731 12:29:06.899517  983133 cri.go:89] found id: "fbf6e54074902829aaeda8038f709bc6be1a92ddd5ecfc4f968300b4af102db5"
	I0731 12:29:06.899548  983133 cri.go:89] found id: "f82333661943fb4c1d070d723f071d724f7b07c3202c8421c83a58bc5ad3b309"
	I0731 12:29:06.899569  983133 cri.go:89] found id: "e0662b00c1daad54149809d682774e5a321767fd6a928c6e31d57ac97e839cdc"
	I0731 12:29:06.899587  983133 cri.go:89] found id: "061c4718fa9b443801b627d0c612f7f3f97239fbff30d7feb5bb8d14f2ef53a9"
	I0731 12:29:06.899608  983133 cri.go:89] found id: "fc02dc022dfb25832f95f29617d755922044087b541f205af43795dcb537dbd0"
	I0731 12:29:06.899639  983133 cri.go:89] found id: "84cc3ee0202a99178570e201a06e6e0c604355ae858b03552b3ade4daae58cb8"
	I0731 12:29:06.899656  983133 cri.go:89] found id: ""
	I0731 12:29:06.899741  983133 ssh_runner.go:195] Run: sudo runc list -f json
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-267284
helpers_test.go:235: (dbg) docker inspect pause-267284:
-- stdout --
	[
	    {
	        "Id": "0fbcfdcfa2122e5e4843f51bc422b40ff2344d5e57bc8510a90756a89629fff2",
	        "Created": "2023-07-31T12:27:27.033288472Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 978314,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-31T12:27:27.358664446Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/0fbcfdcfa2122e5e4843f51bc422b40ff2344d5e57bc8510a90756a89629fff2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0fbcfdcfa2122e5e4843f51bc422b40ff2344d5e57bc8510a90756a89629fff2/hostname",
	        "HostsPath": "/var/lib/docker/containers/0fbcfdcfa2122e5e4843f51bc422b40ff2344d5e57bc8510a90756a89629fff2/hosts",
	        "LogPath": "/var/lib/docker/containers/0fbcfdcfa2122e5e4843f51bc422b40ff2344d5e57bc8510a90756a89629fff2/0fbcfdcfa2122e5e4843f51bc422b40ff2344d5e57bc8510a90756a89629fff2-json.log",
	        "Name": "/pause-267284",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-267284:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-267284",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/46ac95746f96e15277bae21bf7963954da6f1dcd9cb76ab3404500f5320fc95b-init/diff:/var/lib/docker/overlay2/ea390dfb8f8baaae26b2c19880bf5069405274e04629daebd3f048abbe32d27b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/46ac95746f96e15277bae21bf7963954da6f1dcd9cb76ab3404500f5320fc95b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/46ac95746f96e15277bae21bf7963954da6f1dcd9cb76ab3404500f5320fc95b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/46ac95746f96e15277bae21bf7963954da6f1dcd9cb76ab3404500f5320fc95b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-267284",
	                "Source": "/var/lib/docker/volumes/pause-267284/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-267284",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-267284",
	                "name.minikube.sigs.k8s.io": "pause-267284",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5ccc4fc99770bd8324461b1632f9783a20cf126f5b047d0390d87c08cf11ad4e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36032"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36031"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36028"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36030"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36029"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5ccc4fc99770",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-267284": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0fbcfdcfa212",
	                        "pause-267284"
	                    ],
	                    "NetworkID": "86c2f6d43d25d2f92135e5d2cdac39958209d5a64f47029aae0a748de3352898",
	                    "EndpointID": "352c00d8ffc3f1d469e17fb48d6c4f35f6ed6e7c6915b1e1066d7942f12734d0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
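The inspect dump above is ordinary docker inspect output for the pause-267284 kicbase container: every control-plane port (22, 2376, 5000, 8443, 32443) is published only on 127.0.0.1 with an ephemeral host port, and the container sits on its own bridge network with the static address 192.168.76.2. On a host where the profile still exists, the relevant fields can be pulled directly (illustrative commands, not part of the test run):

	docker inspect pause-267284 --format '{{json .NetworkSettings.Ports}}'
	docker network inspect pause-267284 --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'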
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-267284 -n pause-267284
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-267284 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-267284 logs -n 25: (2.465635369s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-522344         | NoKubernetes-522344       | jenkins | v1.31.1 | 31 Jul 23 12:21 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-522344         | NoKubernetes-522344       | jenkins | v1.31.1 | 31 Jul 23 12:21 UTC | 31 Jul 23 12:22 UTC |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-522344         | NoKubernetes-522344       | jenkins | v1.31.1 | 31 Jul 23 12:22 UTC | 31 Jul 23 12:22 UTC |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-522344         | NoKubernetes-522344       | jenkins | v1.31.1 | 31 Jul 23 12:22 UTC | 31 Jul 23 12:22 UTC |
	| start   | -p NoKubernetes-522344         | NoKubernetes-522344       | jenkins | v1.31.1 | 31 Jul 23 12:22 UTC | 31 Jul 23 12:22 UTC |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-522344 sudo    | NoKubernetes-522344       | jenkins | v1.31.1 | 31 Jul 23 12:22 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-522344         | NoKubernetes-522344       | jenkins | v1.31.1 | 31 Jul 23 12:22 UTC | 31 Jul 23 12:22 UTC |
	| start   | -p NoKubernetes-522344         | NoKubernetes-522344       | jenkins | v1.31.1 | 31 Jul 23 12:22 UTC | 31 Jul 23 12:22 UTC |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-522344 sudo    | NoKubernetes-522344       | jenkins | v1.31.1 | 31 Jul 23 12:22 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-522344         | NoKubernetes-522344       | jenkins | v1.31.1 | 31 Jul 23 12:22 UTC | 31 Jul 23 12:22 UTC |
	| start   | -p kubernetes-upgrade-047034   | kubernetes-upgrade-047034 | jenkins | v1.31.1 | 31 Jul 23 12:22 UTC | 31 Jul 23 12:24 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-141478      | missing-upgrade-141478    | jenkins | v1.31.1 | 31 Jul 23 12:23 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-047034   | kubernetes-upgrade-047034 | jenkins | v1.31.1 | 31 Jul 23 12:24 UTC | 31 Jul 23 12:24 UTC |
	| start   | -p kubernetes-upgrade-047034   | kubernetes-upgrade-047034 | jenkins | v1.31.1 | 31 Jul 23 12:24 UTC | 31 Jul 23 12:28 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p missing-upgrade-141478      | missing-upgrade-141478    | jenkins | v1.31.1 | 31 Jul 23 12:24 UTC | 31 Jul 23 12:24 UTC |
	| start   | -p stopped-upgrade-379049      | stopped-upgrade-379049    | jenkins | v1.31.1 | 31 Jul 23 12:25 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-379049      | stopped-upgrade-379049    | jenkins | v1.31.1 | 31 Jul 23 12:26 UTC | 31 Jul 23 12:26 UTC |
	| start   | -p running-upgrade-307223      | running-upgrade-307223    | jenkins | v1.31.1 | 31 Jul 23 12:27 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-307223      | running-upgrade-307223    | jenkins | v1.31.1 | 31 Jul 23 12:27 UTC | 31 Jul 23 12:27 UTC |
	| start   | -p pause-267284 --memory=2048  | pause-267284              | jenkins | v1.31.1 | 31 Jul 23 12:27 UTC | 31 Jul 23 12:28 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker     |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-267284                | pause-267284              | jenkins | v1.31.1 | 31 Jul 23 12:28 UTC | 31 Jul 23 12:29 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-047034   | kubernetes-upgrade-047034 | jenkins | v1.31.1 | 31 Jul 23 12:28 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-047034   | kubernetes-upgrade-047034 | jenkins | v1.31.1 | 31 Jul 23 12:28 UTC | 31 Jul 23 12:29 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-047034   | kubernetes-upgrade-047034 | jenkins | v1.31.1 | 31 Jul 23 12:29 UTC | 31 Jul 23 12:29 UTC |
	| start   | -p force-systemd-flag-198804   | force-systemd-flag-198804 | jenkins | v1.31.1 | 31 Jul 23 12:29 UTC |                     |
	|         | --memory=2048 --force-systemd  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
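	The Audit table records every minikube invocation made on this host during the run, which is why unrelated profiles (NoKubernetes-522344, kubernetes-upgrade-047034, force-systemd-flag-198804) appear in a pause-267284 post-mortem. minikube persists this data under its home directory, typically ~/.minikube/logs/audit.json, and the view shown here is regenerated by the logs command already in the transcript:

	out/minikube-linux-arm64 -p pause-267284 logs -n 25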
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/31 12:29:23
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 12:29:23.513189  986636 out.go:296] Setting OutFile to fd 1 ...
	I0731 12:29:23.513433  986636 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 12:29:23.513447  986636 out.go:309] Setting ErrFile to fd 2...
	I0731 12:29:23.513454  986636 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 12:29:23.513867  986636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-847174/.minikube/bin
	I0731 12:29:23.514667  986636 out.go:303] Setting JSON to false
	I0731 12:29:23.516039  986636 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":72711,"bootTime":1690733853,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0731 12:29:23.516190  986636 start.go:138] virtualization:  
	I0731 12:29:23.519105  986636 out.go:177] * [force-systemd-flag-198804] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0731 12:29:23.521428  986636 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 12:29:23.523215  986636 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:29:23.521529  986636 notify.go:220] Checking for updates...
	I0731 12:29:23.525083  986636 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig
	I0731 12:29:23.526777  986636 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube
	I0731 12:29:23.528581  986636 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 12:29:23.530186  986636 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:29:23.532389  986636 config.go:182] Loaded profile config "pause-267284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 12:29:23.532498  986636 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 12:29:23.562256  986636 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 12:29:23.562366  986636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 12:29:23.645125  986636 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-07-31 12:29:23.633924698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 12:29:23.645230  986636 docker.go:294] overlay module found
	I0731 12:29:23.648332  986636 out.go:177] * Using the docker driver based on user configuration
	I0731 12:29:23.650010  986636 start.go:298] selected driver: docker
	I0731 12:29:23.650056  986636 start.go:898] validating driver "docker" against <nil>
	I0731 12:29:23.650085  986636 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:29:23.650825  986636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 12:29:23.725830  986636 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-07-31 12:29:23.714795231 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 12:29:23.726052  986636 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 12:29:23.726374  986636 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 12:29:23.728288  986636 out.go:177] * Using Docker driver with root privileges
	I0731 12:29:23.730298  986636 cni.go:84] Creating CNI manager for ""
	I0731 12:29:23.730333  986636 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 12:29:23.730345  986636 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 12:29:23.730362  986636 start_flags.go:319] config:
	{Name:force-systemd-flag-198804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-flag-198804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 12:29:23.732880  986636 out.go:177] * Starting control plane node force-systemd-flag-198804 in cluster force-systemd-flag-198804
	I0731 12:29:23.734919  986636 cache.go:122] Beginning downloading kic base image for docker with crio
	I0731 12:29:23.736614  986636 out.go:177] * Pulling base image ...
	I0731 12:29:23.738558  986636 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 12:29:23.738616  986636 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4
	I0731 12:29:23.738628  986636 cache.go:57] Caching tarball of preloaded images
	I0731 12:29:23.738639  986636 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0731 12:29:23.738723  986636 preload.go:174] Found /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0731 12:29:23.738733  986636 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0731 12:29:23.738873  986636 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/force-systemd-flag-198804/config.json ...
	I0731 12:29:23.738902  986636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/force-systemd-flag-198804/config.json: {Name:mkefae31feaaf0e88d13fbbc1a88d45510c23e23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:29:23.758788  986636 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0731 12:29:23.758810  986636 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0731 12:29:23.758849  986636 cache.go:195] Successfully downloaded all kic artifacts
	I0731 12:29:23.758891  986636 start.go:365] acquiring machines lock for force-systemd-flag-198804: {Name:mkbdd0df422778788a159f2f40896732c3925362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:29:23.759013  986636 start.go:369] acquired machines lock for "force-systemd-flag-198804" in 104.575µs
	I0731 12:29:23.759041  986636 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-198804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-flag-198804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 12:29:23.759124  986636 start.go:125] createHost starting for "" (driver="docker")
	I0731 12:29:20.431668  983133 pod_ready.go:102] pod "coredns-5d78c9869d-nc82f" in "kube-system" namespace has status "Ready":"False"
	I0731 12:29:21.942125  983133 pod_ready.go:92] pod "coredns-5d78c9869d-nc82f" in "kube-system" namespace has status "Ready":"True"
	I0731 12:29:21.942146  983133 pod_ready.go:81] duration metric: took 3.548676586s waiting for pod "coredns-5d78c9869d-nc82f" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:21.942158  983133 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-r5tqt" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:23.973064  983133 pod_ready.go:102] pod "coredns-5d78c9869d-r5tqt" in "kube-system" namespace has status "Ready":"False"
	I0731 12:29:23.761025  986636 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0731 12:29:23.761288  986636 start.go:159] libmachine.API.Create for "force-systemd-flag-198804" (driver="docker")
	I0731 12:29:23.761324  986636 client.go:168] LocalClient.Create starting
	I0731 12:29:23.761386  986636 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem
	I0731 12:29:23.761432  986636 main.go:141] libmachine: Decoding PEM data...
	I0731 12:29:23.761450  986636 main.go:141] libmachine: Parsing certificate...
	I0731 12:29:23.761507  986636 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem
	I0731 12:29:23.761528  986636 main.go:141] libmachine: Decoding PEM data...
	I0731 12:29:23.761702  986636 main.go:141] libmachine: Parsing certificate...
	I0731 12:29:23.762133  986636 cli_runner.go:164] Run: docker network inspect force-systemd-flag-198804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 12:29:23.779815  986636 cli_runner.go:211] docker network inspect force-systemd-flag-198804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 12:29:23.779910  986636 network_create.go:281] running [docker network inspect force-systemd-flag-198804] to gather additional debugging logs...
	I0731 12:29:23.779926  986636 cli_runner.go:164] Run: docker network inspect force-systemd-flag-198804
	W0731 12:29:23.798763  986636 cli_runner.go:211] docker network inspect force-systemd-flag-198804 returned with exit code 1
	I0731 12:29:23.798790  986636 network_create.go:284] error running [docker network inspect force-systemd-flag-198804]: docker network inspect force-systemd-flag-198804: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-198804 not found
	I0731 12:29:23.798804  986636 network_create.go:286] output of [docker network inspect force-systemd-flag-198804]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-198804 not found
	
	** /stderr **
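	The non-zero exit above is expected: minikube probes for a network named after the profile and treats "network ... not found" as the cue to create one. Minus minikube's extra driver options and labels, the probe-then-create pair reduces to roughly the following (values copied from the log; the subnet must be free on your host):

	docker network inspect force-systemd-flag-198804 || docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o com.docker.network.driver.mtu=1500 force-systemd-flag-198804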
	I0731 12:29:23.798880  986636 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 12:29:23.818641  986636 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-613e9d6d9aa3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:95:dc:f7:db} reservation:<nil>}
	I0731 12:29:23.819088  986636 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-3cd2f3d254c9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:66:fd:3b:71} reservation:<nil>}
	I0731 12:29:23.819698  986636 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40012aeba0}
	I0731 12:29:23.819722  986636 network_create.go:123] attempt to create docker network force-systemd-flag-198804 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0731 12:29:23.819779  986636 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-198804 force-systemd-flag-198804
	I0731 12:29:23.896096  986636 network_create.go:107] docker network force-systemd-flag-198804 192.168.67.0/24 created
	I0731 12:29:23.896208  986636 kic.go:117] calculated static IP "192.168.67.2" for the "force-systemd-flag-198804" container
	I0731 12:29:23.896299  986636 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 12:29:23.913291  986636 cli_runner.go:164] Run: docker volume create force-systemd-flag-198804 --label name.minikube.sigs.k8s.io=force-systemd-flag-198804 --label created_by.minikube.sigs.k8s.io=true
	I0731 12:29:23.932730  986636 oci.go:103] Successfully created a docker volume force-systemd-flag-198804
	I0731 12:29:23.932820  986636 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-198804-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-198804 --entrypoint /usr/bin/test -v force-systemd-flag-198804:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0731 12:29:24.567150  986636 oci.go:107] Successfully prepared a docker volume force-systemd-flag-198804
	I0731 12:29:24.567213  986636 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 12:29:24.567234  986636 kic.go:190] Starting extracting preloaded images to volume ...
	I0731 12:29:24.567335  986636 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-198804:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
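	Two provisioning steps are visible in this block. First, the subnet scan in network.go walks the known private /24 ranges and takes the first free one: 192.168.49.0/24 and 192.168.58.0/24 are already held by other profiles' bridges, so 192.168.67.0/24 is chosen. Second, the machine volume is pre-seeded by throwaway kicbase containers, one to materialize /var and one to untar the preloaded CRI-O image tarball into the volume, so the node container boots with images already in place. Which subnets existing bridges occupy can be listed with a standard one-liner (illustrative, not from the test):

	docker network ls -q | xargs docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'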
	I0731 12:29:25.993588  983133 pod_ready.go:102] pod "coredns-5d78c9869d-r5tqt" in "kube-system" namespace has status "Ready":"False"
	I0731 12:29:26.973195  983133 pod_ready.go:92] pod "coredns-5d78c9869d-r5tqt" in "kube-system" namespace has status "Ready":"True"
	I0731 12:29:26.973216  983133 pod_ready.go:81] duration metric: took 5.031050247s waiting for pod "coredns-5d78c9869d-r5tqt" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:26.973227  983133 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-267284" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:27.507134  983133 pod_ready.go:92] pod "etcd-pause-267284" in "kube-system" namespace has status "Ready":"True"
	I0731 12:29:27.507189  983133 pod_ready.go:81] duration metric: took 533.953385ms waiting for pod "etcd-pause-267284" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:27.507205  983133 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-267284" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:27.521140  983133 pod_ready.go:92] pod "kube-apiserver-pause-267284" in "kube-system" namespace has status "Ready":"True"
	I0731 12:29:27.521163  983133 pod_ready.go:81] duration metric: took 13.95078ms waiting for pod "kube-apiserver-pause-267284" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:27.521176  983133 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-267284" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:28.302228  983133 pod_ready.go:92] pod "kube-controller-manager-pause-267284" in "kube-system" namespace has status "Ready":"True"
	I0731 12:29:28.302325  983133 pod_ready.go:81] duration metric: took 781.139385ms waiting for pod "kube-controller-manager-pause-267284" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:28.302353  983133 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qrkr7" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:28.581476  983133 pod_ready.go:92] pod "kube-proxy-qrkr7" in "kube-system" namespace has status "Ready":"True"
	I0731 12:29:28.581549  983133 pod_ready.go:81] duration metric: took 279.18038ms waiting for pod "kube-proxy-qrkr7" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:28.581575  983133 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-267284" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:28.975156  983133 pod_ready.go:92] pod "kube-scheduler-pause-267284" in "kube-system" namespace has status "Ready":"True"
	I0731 12:29:28.975177  983133 pod_ready.go:81] duration metric: took 393.582062ms waiting for pod "kube-scheduler-pause-267284" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:28.975186  983133 pod_ready.go:38] duration metric: took 10.591771305s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 12:29:28.975201  983133 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:29:28.975253  983133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:29:28.991377  983133 api_server.go:72] duration metric: took 10.785691221s to wait for apiserver process to appear ...
	I0731 12:29:28.991398  983133 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:29:28.991414  983133 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0731 12:29:29.012418  983133 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0731 12:29:29.013854  983133 api_server.go:141] control plane version: v1.27.3
	I0731 12:29:29.013877  983133 api_server.go:131] duration metric: took 22.473027ms to wait for apiserver health ...
	I0731 12:29:29.013891  983133 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 12:29:29.175246  983133 system_pods.go:59] 8 kube-system pods found
	I0731 12:29:29.177821  983133 system_pods.go:61] "coredns-5d78c9869d-nc82f" [de379dd6-9f7f-4c57-9e10-53a12f65acde] Running
	I0731 12:29:29.177894  983133 system_pods.go:61] "coredns-5d78c9869d-r5tqt" [eb27a8cb-17a9-44b8-808a-3947caa530e1] Running
	I0731 12:29:29.177923  983133 system_pods.go:61] "etcd-pause-267284" [736c805e-5d47-4b90-a4b9-6384c9787602] Running
	I0731 12:29:29.177946  983133 system_pods.go:61] "kindnet-bfc8h" [512025bc-3701-41f6-8fc5-cf18c81efbe7] Running
	I0731 12:29:29.177969  983133 system_pods.go:61] "kube-apiserver-pause-267284" [747bcb76-0a5f-4a1f-9bba-2cca8b17b588] Running
	I0731 12:29:29.178010  983133 system_pods.go:61] "kube-controller-manager-pause-267284" [980304e4-7d40-49fa-96eb-31bc873c1e9b] Running
	I0731 12:29:29.178035  983133 system_pods.go:61] "kube-proxy-qrkr7" [04de0c34-3dbf-4f29-8394-6effa170a95c] Running
	I0731 12:29:29.178055  983133 system_pods.go:61] "kube-scheduler-pause-267284" [034a85a8-bdac-4cd2-901c-4f75c6595971] Running
	I0731 12:29:29.178077  983133 system_pods.go:74] duration metric: took 164.179166ms to wait for pod list to return data ...
	I0731 12:29:29.178110  983133 default_sa.go:34] waiting for default service account to be created ...
	I0731 12:29:29.370470  983133 default_sa.go:45] found service account: "default"
	I0731 12:29:29.370544  983133 default_sa.go:55] duration metric: took 192.410818ms for default service account to be created ...
	I0731 12:29:29.370558  983133 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 12:29:29.574842  983133 system_pods.go:86] 8 kube-system pods found
	I0731 12:29:29.574936  983133 system_pods.go:89] "coredns-5d78c9869d-nc82f" [de379dd6-9f7f-4c57-9e10-53a12f65acde] Running
	I0731 12:29:29.574959  983133 system_pods.go:89] "coredns-5d78c9869d-r5tqt" [eb27a8cb-17a9-44b8-808a-3947caa530e1] Running
	I0731 12:29:29.574981  983133 system_pods.go:89] "etcd-pause-267284" [736c805e-5d47-4b90-a4b9-6384c9787602] Running
	I0731 12:29:29.575014  983133 system_pods.go:89] "kindnet-bfc8h" [512025bc-3701-41f6-8fc5-cf18c81efbe7] Running
	I0731 12:29:29.575032  983133 system_pods.go:89] "kube-apiserver-pause-267284" [747bcb76-0a5f-4a1f-9bba-2cca8b17b588] Running
	I0731 12:29:29.575053  983133 system_pods.go:89] "kube-controller-manager-pause-267284" [980304e4-7d40-49fa-96eb-31bc873c1e9b] Running
	I0731 12:29:29.575073  983133 system_pods.go:89] "kube-proxy-qrkr7" [04de0c34-3dbf-4f29-8394-6effa170a95c] Running
	I0731 12:29:29.575105  983133 system_pods.go:89] "kube-scheduler-pause-267284" [034a85a8-bdac-4cd2-901c-4f75c6595971] Running
	I0731 12:29:29.575130  983133 system_pods.go:126] duration metric: took 204.566391ms to wait for k8s-apps to be running ...
	I0731 12:29:29.575150  983133 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 12:29:29.575240  983133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 12:29:29.595314  983133 system_svc.go:56] duration metric: took 20.153366ms WaitForService to wait for kubelet.
	I0731 12:29:29.595339  983133 kubeadm.go:581] duration metric: took 11.389658654s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0731 12:29:29.595368  983133 node_conditions.go:102] verifying NodePressure condition ...
	I0731 12:29:29.772081  983133 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0731 12:29:29.772127  983133 node_conditions.go:123] node cpu capacity is 2
	I0731 12:29:29.772141  983133 node_conditions.go:105] duration metric: took 176.768462ms to run NodePressure ...
	I0731 12:29:29.772153  983133 start.go:228] waiting for startup goroutines ...
	I0731 12:29:29.772161  983133 start.go:233] waiting for cluster config update ...
	I0731 12:29:29.772168  983133 start.go:242] writing updated cluster config ...
	I0731 12:29:29.772560  983133 ssh_runner.go:195] Run: rm -f paused
	I0731 12:29:29.928399  983133 start.go:596] kubectl: 1.27.4, cluster: 1.27.3 (minor skew: 0)
	I0731 12:29:29.930447  983133 out.go:177] * Done! kubectl is now configured to use "pause-267284" cluster and "default" namespace by default
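	The interleaved lines from PID 983133 are the tail of the pause-267284 second start: pod_ready.go polls each system-critical pod until Ready, then the apiserver /healthz endpoint, the kube-system pod list, the default service account, and the kubelet unit are verified before "Done!" is printed. A rough kubectl equivalent of that readiness gate (context name taken from the log; this is not the code path minikube itself uses):

	kubectl --context pause-267284 -n kube-system wait pod --all --for=condition=Ready --timeout=6m0s
	kubectl --context pause-267284 get --raw /healthz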
	
	* 
	* ==> CRI-O <==
	* Jul 31 12:29:08 pause-267284 crio[2709]: time="2023-07-31 12:29:08.357153837Z" level=info msg="Removed container f82333661943fb4c1d070d723f071d724f7b07c3202c8421c83a58bc5ad3b309: kube-system/kindnet-bfc8h/kindnet-cni" id=0e923aa6-0970-459c-ac91-e8e1708dee9a name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 31 12:29:08 pause-267284 crio[2709]: time="2023-07-31 12:29:08.654842586Z" level=info msg="Started container" PID=3135 containerID=e94cae3497e126b8f4a7aab6aee3564774d49e63b191f5bb61cffe0dc590d499 description=kube-system/kube-proxy-qrkr7/kube-proxy id=8ab63a06-144c-4bcf-869a-24eb4489f8b1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=14acedab36a5addbdec19c6f94b9585fa35f9c27e470dac20c39b4ca923416b4
	Jul 31 12:29:16 pause-267284 crio[2709]: time="2023-07-31 12:29:16.044732422Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Jul 31 12:29:16 pause-267284 crio[2709]: time="2023-07-31 12:29:16.080716206Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jul 31 12:29:16 pause-267284 crio[2709]: time="2023-07-31 12:29:16.080751480Z" level=info msg="Updated default CNI network name to kindnet"
	Jul 31 12:29:16 pause-267284 crio[2709]: time="2023-07-31 12:29:16.080769417Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Jul 31 12:29:16 pause-267284 crio[2709]: time="2023-07-31 12:29:16.105748855Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jul 31 12:29:16 pause-267284 crio[2709]: time="2023-07-31 12:29:16.105788034Z" level=info msg="Updated default CNI network name to kindnet"
	Jul 31 12:29:16 pause-267284 crio[2709]: time="2023-07-31 12:29:16.105806028Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Jul 31 12:29:16 pause-267284 crio[2709]: time="2023-07-31 12:29:16.151583608Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jul 31 12:29:16 pause-267284 crio[2709]: time="2023-07-31 12:29:16.151625003Z" level=info msg="Updated default CNI network name to kindnet"
	Jul 31 12:29:16 pause-267284 crio[2709]: time="2023-07-31 12:29:16.151642578Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Jul 31 12:29:16 pause-267284 crio[2709]: time="2023-07-31 12:29:16.201527083Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jul 31 12:29:16 pause-267284 crio[2709]: time="2023-07-31 12:29:16.201560774Z" level=info msg="Updated default CNI network name to kindnet"
	Jul 31 12:29:25 pause-267284 crio[2709]: time="2023-07-31 12:29:25.905816409Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=08fef7a5-82f4-46c1-a740-c09445a59853 name=/runtime.v1.ImageService/ImageStatus
	Jul 31 12:29:25 pause-267284 crio[2709]: time="2023-07-31 12:29:25.906058025Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105 registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e],Size_:51393451,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=08fef7a5-82f4-46c1-a740-c09445a59853 name=/runtime.v1.ImageService/ImageStatus
	Jul 31 12:29:25 pause-267284 crio[2709]: time="2023-07-31 12:29:25.907065458Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=da219787-a4d9-4afe-8c0b-8ffa8f482f82 name=/runtime.v1.ImageService/ImageStatus
	Jul 31 12:29:25 pause-267284 crio[2709]: time="2023-07-31 12:29:25.907275879Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105 registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e],Size_:51393451,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=da219787-a4d9-4afe-8c0b-8ffa8f482f82 name=/runtime.v1.ImageService/ImageStatus
	Jul 31 12:29:25 pause-267284 crio[2709]: time="2023-07-31 12:29:25.908152587Z" level=info msg="Creating container: kube-system/coredns-5d78c9869d-r5tqt/coredns" id=1e6731bf-3672-48b4-92b2-7632c7cc096b name=/runtime.v1.RuntimeService/CreateContainer
	Jul 31 12:29:25 pause-267284 crio[2709]: time="2023-07-31 12:29:25.908260665Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 31 12:29:25 pause-267284 crio[2709]: time="2023-07-31 12:29:25.929194134Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a45798685793cf595a4988232d05749cb42db9ebc6742ef31ed89a4750ceaa04/merged/etc/passwd: no such file or directory"
	Jul 31 12:29:25 pause-267284 crio[2709]: time="2023-07-31 12:29:25.929247713Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a45798685793cf595a4988232d05749cb42db9ebc6742ef31ed89a4750ceaa04/merged/etc/group: no such file or directory"
	Jul 31 12:29:26 pause-267284 crio[2709]: time="2023-07-31 12:29:26.049838428Z" level=info msg="Created container 36d1a9376bdac32df3dbf6b66fa5abb3d633cc8635b3e26757284f09b25761c1: kube-system/coredns-5d78c9869d-r5tqt/coredns" id=1e6731bf-3672-48b4-92b2-7632c7cc096b name=/runtime.v1.RuntimeService/CreateContainer
	Jul 31 12:29:26 pause-267284 crio[2709]: time="2023-07-31 12:29:26.050913987Z" level=info msg="Starting container: 36d1a9376bdac32df3dbf6b66fa5abb3d633cc8635b3e26757284f09b25761c1" id=cc559091-b89e-4b9a-a6fa-7b9f0465bd05 name=/runtime.v1.RuntimeService/StartContainer
	Jul 31 12:29:26 pause-267284 crio[2709]: time="2023-07-31 12:29:26.081313432Z" level=info msg="Started container" PID=3553 containerID=36d1a9376bdac32df3dbf6b66fa5abb3d633cc8635b3e26757284f09b25761c1 description=kube-system/coredns-5d78c9869d-r5tqt/coredns id=cc559091-b89e-4b9a-a6fa-7b9f0465bd05 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d2e0cb8b293b9ee8c31a48e5997feec1256856929e1c68dde3418a3e8f10654c
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	36d1a9376bdac       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   6 seconds ago       Running             coredns                   2                   d2e0cb8b293b9       coredns-5d78c9869d-r5tqt
	3cc1dd482ad59       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79   24 seconds ago      Running             kindnet-cni               2                   d7b484e183e1f       kindnet-bfc8h
	e9a0cf1447820       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737   24 seconds ago      Running             etcd                      2                   acfb133348ebf       etcd-pause-267284
	491b004b30ba9       ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8   24 seconds ago      Running             kube-controller-manager   2                   e6a4c176ffe0a       kube-controller-manager-pause-267284
	e94cae3497e12       fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a   24 seconds ago      Running             kube-proxy                2                   14acedab36a5a       kube-proxy-qrkr7
	9d384afd8c673       39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473   24 seconds ago      Running             kube-apiserver            2                   e66928e22a651       kube-apiserver-pause-267284
	683efe5909502       bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540   24 seconds ago      Running             kube-scheduler            2                   cedee862ab07b       kube-scheduler-pause-267284
	318a1dbf18ca7       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   24 seconds ago      Running             coredns                   2                   419101fcd7188       coredns-5d78c9869d-nc82f
	1c0a7486e608f       39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473   37 seconds ago      Exited              kube-apiserver            1                   e66928e22a651       kube-apiserver-pause-267284
	4158889d359bf       fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a   37 seconds ago      Exited              kube-proxy                1                   14acedab36a5a       kube-proxy-qrkr7
	942ea5c65e250       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737   37 seconds ago      Exited              etcd                      1                   acfb133348ebf       etcd-pause-267284
	966fcf83fa1b3       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79   37 seconds ago      Exited              kindnet-cni               1                   d7b484e183e1f       kindnet-bfc8h
	d9b25e90709fd       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   37 seconds ago      Exited              coredns                   1                   d2e0cb8b293b9       coredns-5d78c9869d-r5tqt
	74ae9c2ecc8dc       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   37 seconds ago      Exited              coredns                   1                   419101fcd7188       coredns-5d78c9869d-nc82f
	373973d9678f5       ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8   37 seconds ago      Exited              kube-controller-manager   1                   e6a4c176ffe0a       kube-controller-manager-pause-267284
	457558821ba8b       bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540   37 seconds ago      Exited              kube-scheduler            1                   cedee862ab07b       kube-scheduler-pause-267284
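	The ATTEMPT column is the useful signal in this table: every component is Running on attempt 2, and the matching attempt-1 containers from about 37 seconds earlier are all Exited, the signature of one clean restart cycle (the pause test's second start) rather than a crash loop. The same view can be reproduced on the node with standard CRI tooling:

	out/minikube-linux-arm64 -p pause-267284 ssh "sudo crictl ps -a"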
	
	* 
	* ==> coredns [318a1dbf18ca7bf49c82d388ca78a35c60601b200318785f27ff4beff848bf3e] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41995 - 35518 "HINFO IN 8787378084303816787.8563877881973228632. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032476457s
	
	* 
	* ==> coredns [36d1a9376bdac32df3dbf6b66fa5abb3d633cc8635b3e26757284f09b25761c1] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57978 - 56709 "HINFO IN 4361633289896798266.2655127048618595159. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015186315s
	
	* 
	* ==> coredns [74ae9c2ecc8dc4bf134f91af61ff4ec9db0e63f8935348a4ef488b7464017f2c] <==
	* 
	* 
	* ==> coredns [d9b25e90709fdc96747443a2fadfdccd536697833e7746ee6cb43c02625062ed] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:46745 - 2319 "HINFO IN 7700068273409356951.1249373602562824233. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023546521s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
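	The final WARNING shows the failing request was GET https://10.96.0.1:443/version against the in-cluster kubernetes Service while the API server was down. Once the server is back, the same probe can be issued through kubectl's credentials, for example:
	
	  # request the version endpoint the plugin was polling
	  $ kubectl --context pause-267284 get --raw /version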
	
	* 
	* ==> describe nodes <==
	* Name:               pause-267284
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-267284
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0b7b0f4114385a1c2b88c73e894c2289f44aee35
	                    minikube.k8s.io/name=pause-267284
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_31T12_27_56_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 31 Jul 2023 12:27:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-267284
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 31 Jul 2023 12:29:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 31 Jul 2023 12:28:40 +0000   Mon, 31 Jul 2023 12:27:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 31 Jul 2023 12:28:40 +0000   Mon, 31 Jul 2023 12:27:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 31 Jul 2023 12:28:40 +0000   Mon, 31 Jul 2023 12:27:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 31 Jul 2023 12:28:40 +0000   Mon, 31 Jul 2023 12:28:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-267284
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f85dc844f7342719283693062616751
	  System UUID:                9a98f658-0a10-4566-98fd-fc96d2cf9eff
	  Boot ID:                    3709f028-2d57-4df1-ae3d-22c113dc2eeb
	  Kernel Version:             5.15.0-1040-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-nc82f                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     84s
	  kube-system                 coredns-5d78c9869d-r5tqt                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     84s
	  kube-system                 etcd-pause-267284                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         96s
	  kube-system                 kindnet-bfc8h                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      84s
	  kube-system                 kube-apiserver-pause-267284             250m (12%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-pause-267284    200m (10%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-qrkr7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-scheduler-pause-267284             100m (5%)     0 (0%)      0 (0%)           0 (0%)         96s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 82s                  kube-proxy       
	  Normal   Starting                 14s                  kube-proxy       
	  Normal   Starting                 107s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  107s (x8 over 107s)  kubelet          Node pause-267284 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    107s (x8 over 107s)  kubelet          Node pause-267284 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     107s (x8 over 107s)  kubelet          Node pause-267284 status is now: NodeHasSufficientPID
	  Normal   Starting                 97s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  97s                  kubelet          Node pause-267284 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    97s                  kubelet          Node pause-267284 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     97s                  kubelet          Node pause-267284 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           84s                  node-controller  Node pause-267284 event: Registered Node pause-267284 in Controller
	  Normal   NodeReady                52s                  kubelet          Node pause-267284 status is now: NodeReady
	  Warning  ContainerGCFailed        37s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4s                   node-controller  Node pause-267284 event: Registered Node pause-267284 in Controller
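	The ContainerGCFailed warning marks the window in which the kubelet lost the CRI socket at /var/run/crio/crio.sock while CRI-O restarted. A sketch for verifying the runtime recovered on the node:
	
	  # confirm CRI-O is active again and its socket exists
	  $ minikube -p pause-267284 ssh -- sudo systemctl is-active crio
	  $ minikube -p pause-267284 ssh -- ls -l /var/run/crio/crio.sock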
	
	* 
	* ==> dmesg <==
	* [  +0.001037] FS-Cache: O-key=[8] 'ede1c90000000000'
	[  +0.000719] FS-Cache: N-cookie c=000000ad [p=000000a4 fl=2 nc=0 na=1]
	[  +0.000960] FS-Cache: N-cookie d=00000000d17d7ada{9p.inode} n=000000001a6bd468
	[  +0.001024] FS-Cache: N-key=[8] 'ede1c90000000000'
	[  +0.005951] FS-Cache: Duplicate cookie detected
	[  +0.000744] FS-Cache: O-cookie c=000000a7 [p=000000a4 fl=226 nc=0 na=1]
	[  +0.001019] FS-Cache: O-cookie d=00000000d17d7ada{9p.inode} n=0000000040ec07b0
	[  +0.001100] FS-Cache: O-key=[8] 'ede1c90000000000'
	[  +0.000739] FS-Cache: N-cookie c=000000ae [p=000000a4 fl=2 nc=0 na=1]
	[  +0.000924] FS-Cache: N-cookie d=00000000d17d7ada{9p.inode} n=00000000bcbfd487
	[  +0.001134] FS-Cache: N-key=[8] 'ede1c90000000000'
	[  +2.785467] FS-Cache: Duplicate cookie detected
	[  +0.000734] FS-Cache: O-cookie c=000000a5 [p=000000a4 fl=226 nc=0 na=1]
	[  +0.000951] FS-Cache: O-cookie d=00000000d17d7ada{9p.inode} n=000000006d9d7fe3
	[  +0.001098] FS-Cache: O-key=[8] 'ebe1c90000000000'
	[  +0.000685] FS-Cache: N-cookie c=000000b0 [p=000000a4 fl=2 nc=0 na=1]
	[  +0.000905] FS-Cache: N-cookie d=00000000d17d7ada{9p.inode} n=0000000073926d86
	[  +0.001020] FS-Cache: N-key=[8] 'ebe1c90000000000'
	[  +0.282652] FS-Cache: Duplicate cookie detected
	[  +0.000710] FS-Cache: O-cookie c=000000aa [p=000000a4 fl=226 nc=0 na=1]
	[  +0.001044] FS-Cache: O-cookie d=00000000d17d7ada{9p.inode} n=000000008660d6a7
	[  +0.001083] FS-Cache: O-key=[8] 'f4e1c90000000000'
	[  +0.000746] FS-Cache: N-cookie c=000000b1 [p=000000a4 fl=2 nc=0 na=1]
	[  +0.000957] FS-Cache: N-cookie d=00000000d17d7ada{9p.inode} n=000000007f9efda2
	[  +0.001104] FS-Cache: N-key=[8] 'f4e1c90000000000'
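	The FS-Cache "Duplicate cookie detected" entries come from the kernel cache layer for the 9p filesystem (note the {9p.inode} cookies) and are background noise rather than part of this failure. One hedged way to pull only this class of message:
	
	  # show recent kernel warning-level messages (util-linux dmesg)
	  $ minikube -p pause-267284 ssh -- sudo dmesg --level=warn | tail -n 20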
	
	* 
	* ==> etcd [942ea5c65e2504aa623ea11a7f94df96dd4ff81a9805adc34d9ed32fdf7b5b13] <==
	* 
	* 
	* ==> etcd [e9a0cf1447820d06fab34678c8119dda8944ae2b19d40f520904b6e8ec63a418] <==
	* {"level":"info","ts":"2023-07-31T12:29:08.714Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-31T12:29:08.714Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-31T12:29:08.715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2023-07-31T12:29:08.717Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-07-31T12:29:08.718Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-31T12:29:08.718Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-31T12:29:08.733Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-31T12:29:08.733Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-31T12:29:08.733Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-31T12:29:08.734Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-07-31T12:29:08.734Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-07-31T12:29:10.376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2023-07-31T12:29:10.376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-07-31T12:29:10.376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2023-07-31T12:29:10.376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2023-07-31T12:29:10.376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-07-31T12:29:10.376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2023-07-31T12:29:10.376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-07-31T12:29:10.407Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-267284 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-31T12:29:10.407Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-31T12:29:10.409Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-07-31T12:29:10.409Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-31T12:29:10.409Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-31T12:29:10.409Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-31T12:29:10.410Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  12:29:32 up 20:11,  0 users,  load average: 4.15, 3.02, 2.55
	Linux pause-267284 5.15.0-1040-aws #45~20.04.1-Ubuntu SMP Tue Jul 11 19:11:12 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [3cc1dd482ad59c3a122ac141b717f2e8ae794bd87a485ff0bafed9eb6917a650] <==
	* I0731 12:29:08.379482       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0731 12:29:08.380454       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0731 12:29:08.380679       1 main.go:116] setting mtu 1500 for CNI 
	I0731 12:29:08.380734       1 main.go:146] kindnetd IP family: "ipv4"
	I0731 12:29:08.380783       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0731 12:29:16.027171       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0731 12:29:16.044404       1 main.go:227] handling current node
	I0731 12:29:26.068330       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0731 12:29:26.068619       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [966fcf83fa1b3f2df7e1f11ee73cf7853d74cff7ef5f8c0cbd1dd0646493eaa0] <==
	* I0731 12:28:54.829181       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0731 12:28:54.830944       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0731 12:28:54.831231       1 main.go:116] setting mtu 1500 for CNI 
	I0731 12:28:54.832163       1 main.go:146] kindnetd IP family: "ipv4"
	I0731 12:28:54.832253       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
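	This earlier kindnet instance exited right after printing its subnet configuration, before handling any node, consistent with the runtime restart. Logs from a crashed predecessor container can usually be recovered with --previous:
	
	  # fetch logs from the prior instance of the kindnet container
	  $ kubectl --context pause-267284 -n kube-system logs kindnet-bfc8h --previous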
	
	* 
	* ==> kube-apiserver [1c0a7486e608f7b790b4999d77a0c25c4b8076e51257be94ed971399e71b1db1] <==
	* 
	* 
	* ==> kube-apiserver [9d384afd8c673329ffbe1ef39275b26b78c95fa1a365c37d9b927126c6e6c673] <==
	* I0731 12:29:15.688810       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0731 12:29:15.688834       1 aggregator.go:150] waiting for initial CRD sync...
	I0731 12:29:15.688848       1 controller.go:83] Starting OpenAPI AggregationController
	I0731 12:29:15.689596       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0731 12:29:15.879959       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0731 12:29:15.879989       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0731 12:29:15.958428       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 12:29:15.969192       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0731 12:29:15.969221       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0731 12:29:15.969273       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0731 12:29:15.969778       1 shared_informer.go:318] Caches are synced for configmaps
	I0731 12:29:15.970223       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0731 12:29:15.970761       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 12:29:16.000614       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0731 12:29:16.000739       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 12:29:16.015059       1 aggregator.go:152] initial CRD sync complete...
	I0731 12:29:16.028223       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 12:29:16.028308       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 12:29:16.028346       1 cache.go:39] Caches are synced for autoregister controller
	I0731 12:29:16.029727       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	E0731 12:29:16.030088       1 controller.go:155] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0731 12:29:16.694705       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 12:29:28.265680       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0731 12:29:28.580564       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 12:29:28.583895       1 controller.go:624] quota admission added evaluator for: endpoints
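	After the restart the API server re-syncs its caches and re-registers admission evaluators; the single error about removing old endpoints is expected when storage holds no master IPs yet. A verbose readiness sketch once it is serving again:
	
	  # list the API server's individual readiness checks
	  $ kubectl --context pause-267284 get --raw '/readyz?verbose'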
	
	* 
	* ==> kube-controller-manager [373973d9678f5a505c8e98a0e030c0ef9f15d1ddf0a52c1a34a60ade3775c82a] <==
	* 
	* 
	* ==> kube-controller-manager [491b004b30ba95ba4f5623a50134925e7724bdbd0882dea6661b7c5df1841838] <==
	* I0731 12:29:28.249229       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0731 12:29:28.253137       1 shared_informer.go:318] Caches are synced for ephemeral
	I0731 12:29:28.253222       1 shared_informer.go:318] Caches are synced for TTL
	I0731 12:29:28.254413       1 shared_informer.go:318] Caches are synced for disruption
	I0731 12:29:28.256622       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0731 12:29:28.263852       1 shared_informer.go:318] Caches are synced for attach detach
	I0731 12:29:28.276418       1 shared_informer.go:318] Caches are synced for PV protection
	I0731 12:29:28.276732       1 shared_informer.go:318] Caches are synced for crt configmap
	I0731 12:29:28.279123       1 shared_informer.go:318] Caches are synced for GC
	I0731 12:29:28.295252       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0731 12:29:28.317832       1 shared_informer.go:318] Caches are synced for taint
	I0731 12:29:28.317952       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0731 12:29:28.317983       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0731 12:29:28.318025       1 taint_manager.go:211] "Sending events to api server"
	I0731 12:29:28.318053       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-267284"
	I0731 12:29:28.318091       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0731 12:29:28.318238       1 event.go:307] "Event occurred" object="pause-267284" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-267284 event: Registered Node pause-267284 in Controller"
	I0731 12:29:28.362855       1 shared_informer.go:318] Caches are synced for resource quota
	I0731 12:29:28.388557       1 shared_informer.go:318] Caches are synced for stateful set
	I0731 12:29:28.407449       1 shared_informer.go:318] Caches are synced for resource quota
	I0731 12:29:28.415211       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-r5tqt"
	I0731 12:29:28.460220       1 shared_informer.go:318] Caches are synced for daemon sets
	I0731 12:29:28.778101       1 shared_informer.go:318] Caches are synced for garbage collector
	I0731 12:29:28.778133       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0731 12:29:28.803297       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [4158889d359bff3498f692d0bee4770d0525d9bb09a6ec5516f0d47e7262038f] <==
	* 
	* 
	* ==> kube-proxy [e94cae3497e126b8f4a7aab6aee3564774d49e63b191f5bb61cffe0dc590d499] <==
	* I0731 12:29:16.352334       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0731 12:29:16.352468       1 server_others.go:110] "Detected node IP" address="192.168.76.2"
	I0731 12:29:16.381925       1 server_others.go:554] "Using iptables proxy"
	I0731 12:29:17.653503       1 server_others.go:192] "Using iptables Proxier"
	I0731 12:29:17.653616       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0731 12:29:17.653652       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0731 12:29:17.653694       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0731 12:29:17.653789       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 12:29:17.654418       1 server.go:658] "Version info" version="v1.27.3"
	I0731 12:29:17.654667       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 12:29:17.655507       1 config.go:188] "Starting service config controller"
	I0731 12:29:17.660347       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0731 12:29:17.660450       1 config.go:97] "Starting endpoint slice config controller"
	I0731 12:29:17.660485       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0731 12:29:17.661150       1 config.go:315] "Starting node config controller"
	I0731 12:29:17.706902       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0731 12:29:17.707004       1 shared_informer.go:318] Caches are synced for node config
	I0731 12:29:17.760844       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0731 12:29:17.761042       1 shared_informer.go:318] Caches are synced for service config
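	kube-proxy came back in iptables mode, with dual-stack detection falling back to IPv4-only as logged. The NAT rules it programs can be inspected directly on the node, for example:
	
	  # dump the top of the KUBE-SERVICES chain that kube-proxy maintains
	  $ minikube -p pause-267284 ssh -- sudo iptables -t nat -S KUBE-SERVICES | head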
	
	* 
	* ==> kube-scheduler [457558821ba8b6422ab65c71240b616245289bd44aaf6202d0c8d1a2c3d8a8ba] <==
	* 
	* 
	* ==> kube-scheduler [683efe59095025b79e7d7d76c9eb66dcc965e41387a7cbadaa416d697f745030] <==
	* I0731 12:29:13.045918       1 serving.go:348] Generated self-signed cert in-memory
	I0731 12:29:17.933601       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0731 12:29:17.933705       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 12:29:17.985754       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0731 12:29:17.991959       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 12:29:17.992050       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0731 12:29:18.020220       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0731 12:29:17.992070       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 12:29:18.025705       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 12:29:17.992082       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0731 12:29:18.025815       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0731 12:29:18.122959       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0731 12:29:18.128235       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0731 12:29:18.128315       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.497239    1389 status_manager.go:809] "Failed to get status for pod" podUID=04de0c34-3dbf-4f29-8394-6effa170a95c pod="kube-system/kube-proxy-qrkr7" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qrkr7\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.497387    1389 status_manager.go:809] "Failed to get status for pod" podUID=de379dd6-9f7f-4c57-9e10-53a12f65acde pod="kube-system/coredns-5d78c9869d-nc82f" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-nc82f\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.497535    1389 status_manager.go:809] "Failed to get status for pod" podUID=eb27a8cb-17a9-44b8-808a-3947caa530e1 pod="kube-system/coredns-5d78c9869d-r5tqt" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r5tqt\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.497685    1389 status_manager.go:809] "Failed to get status for pod" podUID=5f827aa14242e9d8107d40e4c9cc1d87 pod="kube-system/kube-scheduler-pause-267284" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-267284\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.497842    1389 status_manager.go:809] "Failed to get status for pod" podUID=7bd2cfc4f48e154b67ac36a01c997137 pod="kube-system/etcd-pause-267284" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-267284\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.503042    1389 status_manager.go:809] "Failed to get status for pod" podUID=512025bc-3701-41f6-8fc5-cf18c81efbe7 pod="kube-system/kindnet-bfc8h" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-bfc8h\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.503239    1389 status_manager.go:809] "Failed to get status for pod" podUID=04de0c34-3dbf-4f29-8394-6effa170a95c pod="kube-system/kube-proxy-qrkr7" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qrkr7\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.503402    1389 status_manager.go:809] "Failed to get status for pod" podUID=de379dd6-9f7f-4c57-9e10-53a12f65acde pod="kube-system/coredns-5d78c9869d-nc82f" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-nc82f\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.503550    1389 status_manager.go:809] "Failed to get status for pod" podUID=eb27a8cb-17a9-44b8-808a-3947caa530e1 pod="kube-system/coredns-5d78c9869d-r5tqt" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r5tqt\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.503701    1389 status_manager.go:809] "Failed to get status for pod" podUID=5f827aa14242e9d8107d40e4c9cc1d87 pod="kube-system/kube-scheduler-pause-267284" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-267284\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.503851    1389 status_manager.go:809] "Failed to get status for pod" podUID=7bd2cfc4f48e154b67ac36a01c997137 pod="kube-system/etcd-pause-267284" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-267284\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.504005    1389 status_manager.go:809] "Failed to get status for pod" podUID=1387b335b2f7944f67e1a6d412fb421d pod="kube-system/kube-controller-manager-pause-267284" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-267284\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.504209    1389 status_manager.go:809] "Failed to get status for pod" podUID=33611319035893b2a47cdd2db1750141 pod="kube-system/kube-apiserver-pause-267284" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-267284\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.508272    1389 status_manager.go:809] "Failed to get status for pod" podUID=1387b335b2f7944f67e1a6d412fb421d pod="kube-system/kube-controller-manager-pause-267284" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-267284\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.508465    1389 status_manager.go:809] "Failed to get status for pod" podUID=33611319035893b2a47cdd2db1750141 pod="kube-system/kube-apiserver-pause-267284" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-267284\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.508637    1389 status_manager.go:809] "Failed to get status for pod" podUID=512025bc-3701-41f6-8fc5-cf18c81efbe7 pod="kube-system/kindnet-bfc8h" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-bfc8h\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.508794    1389 status_manager.go:809] "Failed to get status for pod" podUID=04de0c34-3dbf-4f29-8394-6effa170a95c pod="kube-system/kube-proxy-qrkr7" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qrkr7\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.508951    1389 status_manager.go:809] "Failed to get status for pod" podUID=de379dd6-9f7f-4c57-9e10-53a12f65acde pod="kube-system/coredns-5d78c9869d-nc82f" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-nc82f\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.509106    1389 status_manager.go:809] "Failed to get status for pod" podUID=eb27a8cb-17a9-44b8-808a-3947caa530e1 pod="kube-system/coredns-5d78c9869d-r5tqt" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r5tqt\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.509281    1389 status_manager.go:809] "Failed to get status for pod" podUID=5f827aa14242e9d8107d40e4c9cc1d87 pod="kube-system/kube-scheduler-pause-267284" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-267284\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.509441    1389 status_manager.go:809] "Failed to get status for pod" podUID=7bd2cfc4f48e154b67ac36a01c997137 pod="kube-system/etcd-pause-267284" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-267284\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:11 pause-267284 kubelet[1389]: I0731 12:29:11.483550    1389 scope.go:115] "RemoveContainer" containerID="d9b25e90709fdc96747443a2fadfdccd536697833e7746ee6cb43c02625062ed"
	Jul 31 12:29:11 pause-267284 kubelet[1389]: E0731 12:29:11.483912    1389 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5d78c9869d-r5tqt_kube-system(eb27a8cb-17a9-44b8-808a-3947caa530e1)\"" pod="kube-system/coredns-5d78c9869d-r5tqt" podUID=eb27a8cb-17a9-44b8-808a-3947caa530e1
	Jul 31 12:29:16 pause-267284 kubelet[1389]: W0731 12:29:16.231263    1389 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Jul 31 12:29:25 pause-267284 kubelet[1389]: I0731 12:29:25.904031    1389 scope.go:115] "RemoveContainer" containerID="d9b25e90709fdc96747443a2fadfdccd536697833e7746ee6cb43c02625062ed"
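	The kubelet entries trace the restart: a burst of connection-refused status updates while the API server was down at 12:29:08, then a 10s CrashLoopBackOff for the coredns container it tried to restart. To inspect the back-off state and restart counts for that pod:
	
	  # show events, restart counts, and back-off for the affected pod
	  $ kubectl --context pause-267284 -n kube-system describe pod coredns-5d78c9869d-r5tqt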
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-267284 -n pause-267284
helpers_test.go:261: (dbg) Run:  kubectl --context pause-267284 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-267284
helpers_test.go:235: (dbg) docker inspect pause-267284:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0fbcfdcfa2122e5e4843f51bc422b40ff2344d5e57bc8510a90756a89629fff2",
	        "Created": "2023-07-31T12:27:27.033288472Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 978314,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-31T12:27:27.358664446Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/0fbcfdcfa2122e5e4843f51bc422b40ff2344d5e57bc8510a90756a89629fff2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0fbcfdcfa2122e5e4843f51bc422b40ff2344d5e57bc8510a90756a89629fff2/hostname",
	        "HostsPath": "/var/lib/docker/containers/0fbcfdcfa2122e5e4843f51bc422b40ff2344d5e57bc8510a90756a89629fff2/hosts",
	        "LogPath": "/var/lib/docker/containers/0fbcfdcfa2122e5e4843f51bc422b40ff2344d5e57bc8510a90756a89629fff2/0fbcfdcfa2122e5e4843f51bc422b40ff2344d5e57bc8510a90756a89629fff2-json.log",
	        "Name": "/pause-267284",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-267284:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-267284",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/46ac95746f96e15277bae21bf7963954da6f1dcd9cb76ab3404500f5320fc95b-init/diff:/var/lib/docker/overlay2/ea390dfb8f8baaae26b2c19880bf5069405274e04629daebd3f048abbe32d27b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/46ac95746f96e15277bae21bf7963954da6f1dcd9cb76ab3404500f5320fc95b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/46ac95746f96e15277bae21bf7963954da6f1dcd9cb76ab3404500f5320fc95b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/46ac95746f96e15277bae21bf7963954da6f1dcd9cb76ab3404500f5320fc95b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-267284",
	                "Source": "/var/lib/docker/volumes/pause-267284/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-267284",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-267284",
	                "name.minikube.sigs.k8s.io": "pause-267284",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5ccc4fc99770bd8324461b1632f9783a20cf126f5b047d0390d87c08cf11ad4e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36032"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36031"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36028"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36030"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36029"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5ccc4fc99770",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-267284": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0fbcfdcfa212",
	                        "pause-267284"
	                    ],
	                    "NetworkID": "86c2f6d43d25d2f92135e5d2cdac39958209d5a64f47029aae0a748de3352898",
	                    "EndpointID": "352c00d8ffc3f1d469e17fb48d6c4f35f6ed6e7c6915b1e1066d7942f12734d0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
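The inspect output above records the published host ports (for example, 8443/tcp bound to 127.0.0.1:36029). A sketch for extracting a single mapping with a Go template instead of scanning the JSON by eye:

    # print the host port Docker bound for the container's API server port
    $ docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-267284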
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-267284 -n pause-267284
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-267284 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-267284 logs -n 25: (2.55597727s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-522344         | NoKubernetes-522344       | jenkins | v1.31.1 | 31 Jul 23 12:21 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-522344         | NoKubernetes-522344       | jenkins | v1.31.1 | 31 Jul 23 12:21 UTC | 31 Jul 23 12:22 UTC |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-522344         | NoKubernetes-522344       | jenkins | v1.31.1 | 31 Jul 23 12:22 UTC | 31 Jul 23 12:22 UTC |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-522344         | NoKubernetes-522344       | jenkins | v1.31.1 | 31 Jul 23 12:22 UTC | 31 Jul 23 12:22 UTC |
	| start   | -p NoKubernetes-522344         | NoKubernetes-522344       | jenkins | v1.31.1 | 31 Jul 23 12:22 UTC | 31 Jul 23 12:22 UTC |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-522344 sudo    | NoKubernetes-522344       | jenkins | v1.31.1 | 31 Jul 23 12:22 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-522344         | NoKubernetes-522344       | jenkins | v1.31.1 | 31 Jul 23 12:22 UTC | 31 Jul 23 12:22 UTC |
	| start   | -p NoKubernetes-522344         | NoKubernetes-522344       | jenkins | v1.31.1 | 31 Jul 23 12:22 UTC | 31 Jul 23 12:22 UTC |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-522344 sudo    | NoKubernetes-522344       | jenkins | v1.31.1 | 31 Jul 23 12:22 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-522344         | NoKubernetes-522344       | jenkins | v1.31.1 | 31 Jul 23 12:22 UTC | 31 Jul 23 12:22 UTC |
	| start   | -p kubernetes-upgrade-047034   | kubernetes-upgrade-047034 | jenkins | v1.31.1 | 31 Jul 23 12:22 UTC | 31 Jul 23 12:24 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-141478      | missing-upgrade-141478    | jenkins | v1.31.1 | 31 Jul 23 12:23 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-047034   | kubernetes-upgrade-047034 | jenkins | v1.31.1 | 31 Jul 23 12:24 UTC | 31 Jul 23 12:24 UTC |
	| start   | -p kubernetes-upgrade-047034   | kubernetes-upgrade-047034 | jenkins | v1.31.1 | 31 Jul 23 12:24 UTC | 31 Jul 23 12:28 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p missing-upgrade-141478      | missing-upgrade-141478    | jenkins | v1.31.1 | 31 Jul 23 12:24 UTC | 31 Jul 23 12:24 UTC |
	| start   | -p stopped-upgrade-379049      | stopped-upgrade-379049    | jenkins | v1.31.1 | 31 Jul 23 12:25 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-379049      | stopped-upgrade-379049    | jenkins | v1.31.1 | 31 Jul 23 12:26 UTC | 31 Jul 23 12:26 UTC |
	| start   | -p running-upgrade-307223      | running-upgrade-307223    | jenkins | v1.31.1 | 31 Jul 23 12:27 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-307223      | running-upgrade-307223    | jenkins | v1.31.1 | 31 Jul 23 12:27 UTC | 31 Jul 23 12:27 UTC |
	| start   | -p pause-267284 --memory=2048  | pause-267284              | jenkins | v1.31.1 | 31 Jul 23 12:27 UTC | 31 Jul 23 12:28 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker     |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-267284                | pause-267284              | jenkins | v1.31.1 | 31 Jul 23 12:28 UTC | 31 Jul 23 12:29 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-047034   | kubernetes-upgrade-047034 | jenkins | v1.31.1 | 31 Jul 23 12:28 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-047034   | kubernetes-upgrade-047034 | jenkins | v1.31.1 | 31 Jul 23 12:28 UTC | 31 Jul 23 12:29 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-047034   | kubernetes-upgrade-047034 | jenkins | v1.31.1 | 31 Jul 23 12:29 UTC | 31 Jul 23 12:29 UTC |
	| start   | -p force-systemd-flag-198804   | force-systemd-flag-198804 | jenkins | v1.31.1 | 31 Jul 23 12:29 UTC |                     |
	|         | --memory=2048 --force-systemd  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/31 12:29:23
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 12:29:23.513189  986636 out.go:296] Setting OutFile to fd 1 ...
	I0731 12:29:23.513433  986636 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 12:29:23.513447  986636 out.go:309] Setting ErrFile to fd 2...
	I0731 12:29:23.513454  986636 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 12:29:23.513867  986636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-847174/.minikube/bin
	I0731 12:29:23.514667  986636 out.go:303] Setting JSON to false
	I0731 12:29:23.516039  986636 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":72711,"bootTime":1690733853,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0731 12:29:23.516190  986636 start.go:138] virtualization:  
	I0731 12:29:23.519105  986636 out.go:177] * [force-systemd-flag-198804] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0731 12:29:23.521428  986636 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 12:29:23.523215  986636 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:29:23.521529  986636 notify.go:220] Checking for updates...
	I0731 12:29:23.525083  986636 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig
	I0731 12:29:23.526777  986636 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube
	I0731 12:29:23.528581  986636 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 12:29:23.530186  986636 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:29:23.532389  986636 config.go:182] Loaded profile config "pause-267284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 12:29:23.532498  986636 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 12:29:23.562256  986636 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 12:29:23.562366  986636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 12:29:23.645125  986636 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-07-31 12:29:23.633924698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 12:29:23.645230  986636 docker.go:294] overlay module found
	I0731 12:29:23.648332  986636 out.go:177] * Using the docker driver based on user configuration
	I0731 12:29:23.650010  986636 start.go:298] selected driver: docker
	I0731 12:29:23.650056  986636 start.go:898] validating driver "docker" against <nil>
	I0731 12:29:23.650085  986636 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:29:23.650825  986636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 12:29:23.725830  986636 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-07-31 12:29:23.714795231 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 12:29:23.726052  986636 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 12:29:23.726374  986636 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 12:29:23.728288  986636 out.go:177] * Using Docker driver with root privileges
	I0731 12:29:23.730298  986636 cni.go:84] Creating CNI manager for ""
	I0731 12:29:23.730333  986636 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 12:29:23.730345  986636 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 12:29:23.730362  986636 start_flags.go:319] config:
	{Name:force-systemd-flag-198804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-flag-198804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 12:29:23.732880  986636 out.go:177] * Starting control plane node force-systemd-flag-198804 in cluster force-systemd-flag-198804
	I0731 12:29:23.734919  986636 cache.go:122] Beginning downloading kic base image for docker with crio
	I0731 12:29:23.736614  986636 out.go:177] * Pulling base image ...
	I0731 12:29:23.738558  986636 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 12:29:23.738616  986636 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4
	I0731 12:29:23.738628  986636 cache.go:57] Caching tarball of preloaded images
	I0731 12:29:23.738639  986636 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0731 12:29:23.738723  986636 preload.go:174] Found /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0731 12:29:23.738733  986636 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0731 12:29:23.738873  986636 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/force-systemd-flag-198804/config.json ...
	I0731 12:29:23.738902  986636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/force-systemd-flag-198804/config.json: {Name:mkefae31feaaf0e88d13fbbc1a88d45510c23e23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:29:23.758788  986636 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0731 12:29:23.758810  986636 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0731 12:29:23.758849  986636 cache.go:195] Successfully downloaded all kic artifacts
	I0731 12:29:23.758891  986636 start.go:365] acquiring machines lock for force-systemd-flag-198804: {Name:mkbdd0df422778788a159f2f40896732c3925362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:29:23.759013  986636 start.go:369] acquired machines lock for "force-systemd-flag-198804" in 104.575µs
	I0731 12:29:23.759041  986636 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-198804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-flag-198804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 12:29:23.759124  986636 start.go:125] createHost starting for "" (driver="docker")
	I0731 12:29:20.431668  983133 pod_ready.go:102] pod "coredns-5d78c9869d-nc82f" in "kube-system" namespace has status "Ready":"False"
	I0731 12:29:21.942125  983133 pod_ready.go:92] pod "coredns-5d78c9869d-nc82f" in "kube-system" namespace has status "Ready":"True"
	I0731 12:29:21.942146  983133 pod_ready.go:81] duration metric: took 3.548676586s waiting for pod "coredns-5d78c9869d-nc82f" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:21.942158  983133 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-r5tqt" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:23.973064  983133 pod_ready.go:102] pod "coredns-5d78c9869d-r5tqt" in "kube-system" namespace has status "Ready":"False"
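	The pod_ready polls above watch the pod's Ready condition. A roughly equivalent manual check (context name from this run; the jsonpath filter is an illustration, not what the test executes):

	    kubectl --context pause-267284 -n kube-system get pod coredns-5d78c9869d-r5tqt \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # prints False until the pod reports Ready, then True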
	I0731 12:29:23.761025  986636 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0731 12:29:23.761288  986636 start.go:159] libmachine.API.Create for "force-systemd-flag-198804" (driver="docker")
	I0731 12:29:23.761324  986636 client.go:168] LocalClient.Create starting
	I0731 12:29:23.761386  986636 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem
	I0731 12:29:23.761432  986636 main.go:141] libmachine: Decoding PEM data...
	I0731 12:29:23.761450  986636 main.go:141] libmachine: Parsing certificate...
	I0731 12:29:23.761507  986636 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem
	I0731 12:29:23.761528  986636 main.go:141] libmachine: Decoding PEM data...
	I0731 12:29:23.761702  986636 main.go:141] libmachine: Parsing certificate...
	I0731 12:29:23.762133  986636 cli_runner.go:164] Run: docker network inspect force-systemd-flag-198804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 12:29:23.779815  986636 cli_runner.go:211] docker network inspect force-systemd-flag-198804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 12:29:23.779910  986636 network_create.go:281] running [docker network inspect force-systemd-flag-198804] to gather additional debugging logs...
	I0731 12:29:23.779926  986636 cli_runner.go:164] Run: docker network inspect force-systemd-flag-198804
	W0731 12:29:23.798763  986636 cli_runner.go:211] docker network inspect force-systemd-flag-198804 returned with exit code 1
	I0731 12:29:23.798790  986636 network_create.go:284] error running [docker network inspect force-systemd-flag-198804]: docker network inspect force-systemd-flag-198804: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-198804 not found
	I0731 12:29:23.798804  986636 network_create.go:286] output of [docker network inspect force-systemd-flag-198804]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-198804 not found
	
	** /stderr **
	I0731 12:29:23.798880  986636 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 12:29:23.818641  986636 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-613e9d6d9aa3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:95:dc:f7:db} reservation:<nil>}
	I0731 12:29:23.819088  986636 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-3cd2f3d254c9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:66:fd:3b:71} reservation:<nil>}
	I0731 12:29:23.819698  986636 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40012aeba0}
	I0731 12:29:23.819722  986636 network_create.go:123] attempt to create docker network force-systemd-flag-198804 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0731 12:29:23.819779  986636 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-198804 force-systemd-flag-198804
	I0731 12:29:23.896096  986636 network_create.go:107] docker network force-systemd-flag-198804 192.168.67.0/24 created
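	The freshly created network can be read back by hand to confirm the chosen subnet and gateway (same network name as above):

	    docker network inspect force-systemd-flag-198804 \
	      --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	    # expected: 192.168.67.0/24 192.168.67.1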
	I0731 12:29:23.896208  986636 kic.go:117] calculated static IP "192.168.67.2" for the "force-systemd-flag-198804" container
	I0731 12:29:23.896299  986636 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 12:29:23.913291  986636 cli_runner.go:164] Run: docker volume create force-systemd-flag-198804 --label name.minikube.sigs.k8s.io=force-systemd-flag-198804 --label created_by.minikube.sigs.k8s.io=true
	I0731 12:29:23.932730  986636 oci.go:103] Successfully created a docker volume force-systemd-flag-198804
	I0731 12:29:23.932820  986636 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-198804-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-198804 --entrypoint /usr/bin/test -v force-systemd-flag-198804:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0731 12:29:24.567150  986636 oci.go:107] Successfully prepared a docker volume force-systemd-flag-198804
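	The "preload-sidecar" above is a throwaway container whose entrypoint is just test -d /var/lib: mounting the empty named volume at /var makes Docker copy the image's /var contents into it before the command runs. A minimal sketch of the same pattern (the demo volume name is hypothetical; the image ref is shortened from the digest-pinned one above):

	    docker volume create demo-var
	    docker run --rm --entrypoint /usr/bin/test \
	      -v demo-var:/var gcr.io/k8s-minikube/kicbase:v0.0.40 -d /var/lib
	    # exits 0 once the volume has been populated from the image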
	I0731 12:29:24.567213  986636 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 12:29:24.567234  986636 kic.go:190] Starting extracting preloaded images to volume ...
	I0731 12:29:24.567335  986636 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-198804:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0731 12:29:25.993588  983133 pod_ready.go:102] pod "coredns-5d78c9869d-r5tqt" in "kube-system" namespace has status "Ready":"False"
	I0731 12:29:26.973195  983133 pod_ready.go:92] pod "coredns-5d78c9869d-r5tqt" in "kube-system" namespace has status "Ready":"True"
	I0731 12:29:26.973216  983133 pod_ready.go:81] duration metric: took 5.031050247s waiting for pod "coredns-5d78c9869d-r5tqt" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:26.973227  983133 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-267284" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:27.507134  983133 pod_ready.go:92] pod "etcd-pause-267284" in "kube-system" namespace has status "Ready":"True"
	I0731 12:29:27.507189  983133 pod_ready.go:81] duration metric: took 533.953385ms waiting for pod "etcd-pause-267284" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:27.507205  983133 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-267284" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:27.521140  983133 pod_ready.go:92] pod "kube-apiserver-pause-267284" in "kube-system" namespace has status "Ready":"True"
	I0731 12:29:27.521163  983133 pod_ready.go:81] duration metric: took 13.95078ms waiting for pod "kube-apiserver-pause-267284" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:27.521176  983133 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-267284" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:28.302228  983133 pod_ready.go:92] pod "kube-controller-manager-pause-267284" in "kube-system" namespace has status "Ready":"True"
	I0731 12:29:28.302325  983133 pod_ready.go:81] duration metric: took 781.139385ms waiting for pod "kube-controller-manager-pause-267284" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:28.302353  983133 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qrkr7" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:28.581476  983133 pod_ready.go:92] pod "kube-proxy-qrkr7" in "kube-system" namespace has status "Ready":"True"
	I0731 12:29:28.581549  983133 pod_ready.go:81] duration metric: took 279.18038ms waiting for pod "kube-proxy-qrkr7" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:28.581575  983133 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-267284" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:28.975156  983133 pod_ready.go:92] pod "kube-scheduler-pause-267284" in "kube-system" namespace has status "Ready":"True"
	I0731 12:29:28.975177  983133 pod_ready.go:81] duration metric: took 393.582062ms waiting for pod "kube-scheduler-pause-267284" in "kube-system" namespace to be "Ready" ...
	I0731 12:29:28.975186  983133 pod_ready.go:38] duration metric: took 10.591771305s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 12:29:28.975201  983133 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:29:28.975253  983133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:29:28.991377  983133 api_server.go:72] duration metric: took 10.785691221s to wait for apiserver process to appear ...
	I0731 12:29:28.991398  983133 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:29:28.991414  983133 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0731 12:29:29.012418  983133 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
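	The healthz probe above is reproducible from the host running Docker (IP and port taken from this run; -k because the apiserver presents a cluster-local certificate):

	    curl -k https://192.168.76.2:8443/healthz
	    # ok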
	I0731 12:29:29.013854  983133 api_server.go:141] control plane version: v1.27.3
	I0731 12:29:29.013877  983133 api_server.go:131] duration metric: took 22.473027ms to wait for apiserver health ...
	I0731 12:29:29.013891  983133 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 12:29:29.175246  983133 system_pods.go:59] 8 kube-system pods found
	I0731 12:29:29.177821  983133 system_pods.go:61] "coredns-5d78c9869d-nc82f" [de379dd6-9f7f-4c57-9e10-53a12f65acde] Running
	I0731 12:29:29.177894  983133 system_pods.go:61] "coredns-5d78c9869d-r5tqt" [eb27a8cb-17a9-44b8-808a-3947caa530e1] Running
	I0731 12:29:29.177923  983133 system_pods.go:61] "etcd-pause-267284" [736c805e-5d47-4b90-a4b9-6384c9787602] Running
	I0731 12:29:29.177946  983133 system_pods.go:61] "kindnet-bfc8h" [512025bc-3701-41f6-8fc5-cf18c81efbe7] Running
	I0731 12:29:29.177969  983133 system_pods.go:61] "kube-apiserver-pause-267284" [747bcb76-0a5f-4a1f-9bba-2cca8b17b588] Running
	I0731 12:29:29.178010  983133 system_pods.go:61] "kube-controller-manager-pause-267284" [980304e4-7d40-49fa-96eb-31bc873c1e9b] Running
	I0731 12:29:29.178035  983133 system_pods.go:61] "kube-proxy-qrkr7" [04de0c34-3dbf-4f29-8394-6effa170a95c] Running
	I0731 12:29:29.178055  983133 system_pods.go:61] "kube-scheduler-pause-267284" [034a85a8-bdac-4cd2-901c-4f75c6595971] Running
	I0731 12:29:29.178077  983133 system_pods.go:74] duration metric: took 164.179166ms to wait for pod list to return data ...
	I0731 12:29:29.178110  983133 default_sa.go:34] waiting for default service account to be created ...
	I0731 12:29:29.370470  983133 default_sa.go:45] found service account: "default"
	I0731 12:29:29.370544  983133 default_sa.go:55] duration metric: took 192.410818ms for default service account to be created ...
	I0731 12:29:29.370558  983133 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 12:29:29.574842  983133 system_pods.go:86] 8 kube-system pods found
	I0731 12:29:29.574936  983133 system_pods.go:89] "coredns-5d78c9869d-nc82f" [de379dd6-9f7f-4c57-9e10-53a12f65acde] Running
	I0731 12:29:29.574959  983133 system_pods.go:89] "coredns-5d78c9869d-r5tqt" [eb27a8cb-17a9-44b8-808a-3947caa530e1] Running
	I0731 12:29:29.574981  983133 system_pods.go:89] "etcd-pause-267284" [736c805e-5d47-4b90-a4b9-6384c9787602] Running
	I0731 12:29:29.575014  983133 system_pods.go:89] "kindnet-bfc8h" [512025bc-3701-41f6-8fc5-cf18c81efbe7] Running
	I0731 12:29:29.575032  983133 system_pods.go:89] "kube-apiserver-pause-267284" [747bcb76-0a5f-4a1f-9bba-2cca8b17b588] Running
	I0731 12:29:29.575053  983133 system_pods.go:89] "kube-controller-manager-pause-267284" [980304e4-7d40-49fa-96eb-31bc873c1e9b] Running
	I0731 12:29:29.575073  983133 system_pods.go:89] "kube-proxy-qrkr7" [04de0c34-3dbf-4f29-8394-6effa170a95c] Running
	I0731 12:29:29.575105  983133 system_pods.go:89] "kube-scheduler-pause-267284" [034a85a8-bdac-4cd2-901c-4f75c6595971] Running
	I0731 12:29:29.575130  983133 system_pods.go:126] duration metric: took 204.566391ms to wait for k8s-apps to be running ...
	I0731 12:29:29.575150  983133 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 12:29:29.575240  983133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 12:29:29.595314  983133 system_svc.go:56] duration metric: took 20.153366ms WaitForService to wait for kubelet.
	I0731 12:29:29.595339  983133 kubeadm.go:581] duration metric: took 11.389658654s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0731 12:29:29.595368  983133 node_conditions.go:102] verifying NodePressure condition ...
	I0731 12:29:29.772081  983133 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0731 12:29:29.772127  983133 node_conditions.go:123] node cpu capacity is 2
	I0731 12:29:29.772141  983133 node_conditions.go:105] duration metric: took 176.768462ms to run NodePressure ...
	I0731 12:29:29.772153  983133 start.go:228] waiting for startup goroutines ...
	I0731 12:29:29.772161  983133 start.go:233] waiting for cluster config update ...
	I0731 12:29:29.772168  983133 start.go:242] writing updated cluster config ...
	I0731 12:29:29.772560  983133 ssh_runner.go:195] Run: rm -f paused
	I0731 12:29:29.928399  983133 start.go:596] kubectl: 1.27.4, cluster: 1.27.3 (minor skew: 0)
	I0731 12:29:29.930447  983133 out.go:177] * Done! kubectl is now configured to use "pause-267284" cluster and "default" namespace by default
	I0731 12:29:28.878688  986636 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-198804:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.311299703s)
	I0731 12:29:28.878717  986636 kic.go:199] duration metric: took 4.311479 seconds to extract preloaded images to volume
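	A spot-check that the preload actually landed in the volume (a sketch: busybox is an assumption, any small image with ls works, and /var/lib/containers assumes the usual cri-o storage layout inside the tarball):

	    docker run --rm -v force-systemd-flag-198804:/var busybox ls /var/lib/containers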
	W0731 12:29:28.878886  986636 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0731 12:29:28.879006  986636 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0731 12:29:28.946473  986636 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-198804 --name force-systemd-flag-198804 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-198804 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-198804 --network force-systemd-flag-198804 --ip 192.168.67.2 --volume force-systemd-flag-198804:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
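	Each --publish=127.0.0.1:: flag above binds to an ephemeral localhost port; the SSH mapping the provisioner uses below can be looked up with:

	    docker port force-systemd-flag-198804 22/tcp
	    # 127.0.0.1:36037 in this run (matching the SSH client lines that follow)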
	I0731 12:29:29.330954  986636 cli_runner.go:164] Run: docker container inspect force-systemd-flag-198804 --format={{.State.Running}}
	I0731 12:29:29.359821  986636 cli_runner.go:164] Run: docker container inspect force-systemd-flag-198804 --format={{.State.Status}}
	I0731 12:29:29.390223  986636 cli_runner.go:164] Run: docker exec force-systemd-flag-198804 stat /var/lib/dpkg/alternatives/iptables
	I0731 12:29:29.493377  986636 oci.go:144] the created container "force-systemd-flag-198804" has a running status.
	I0731 12:29:29.493405  986636 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/force-systemd-flag-198804/id_rsa...
	I0731 12:29:29.945333  986636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/force-systemd-flag-198804/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0731 12:29:29.945430  986636 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16968-847174/.minikube/machines/force-systemd-flag-198804/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0731 12:29:29.989598  986636 cli_runner.go:164] Run: docker container inspect force-systemd-flag-198804 --format={{.State.Status}}
	I0731 12:29:30.038870  986636 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0731 12:29:30.039638  986636 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-198804 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0731 12:29:30.388326  986636 cli_runner.go:164] Run: docker container inspect force-systemd-flag-198804 --format={{.State.Status}}
	I0731 12:29:30.474808  986636 machine.go:88] provisioning docker machine ...
	I0731 12:29:30.474840  986636 ubuntu.go:169] provisioning hostname "force-systemd-flag-198804"
	I0731 12:29:30.474928  986636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-198804
	I0731 12:29:30.548028  986636 main.go:141] libmachine: Using SSH client type: native
	I0731 12:29:30.548623  986636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 36037 <nil> <nil>}
	I0731 12:29:30.548640  986636 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-198804 && echo "force-systemd-flag-198804" | sudo tee /etc/hostname
	I0731 12:29:30.854117  986636 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-198804
	
	I0731 12:29:30.854195  986636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-198804
	I0731 12:29:30.881907  986636 main.go:141] libmachine: Using SSH client type: native
	I0731 12:29:30.882351  986636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 36037 <nil> <nil>}
	I0731 12:29:30.882371  986636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-198804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-198804/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-198804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 12:29:31.075972  986636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
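	The SSH snippet above pins the node's hostname in /etc/hosts. A quick verification inside the container (a sketch; grep is assumed present in the kicbase Ubuntu image):

	    docker exec force-systemd-flag-198804 grep force-systemd-flag-198804 /etc/hosts
	    # 127.0.1.1 force-systemd-flag-198804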
	I0731 12:29:31.075999  986636 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16968-847174/.minikube CaCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16968-847174/.minikube}
	I0731 12:29:31.076038  986636 ubuntu.go:177] setting up certificates
	I0731 12:29:31.076047  986636 provision.go:83] configureAuth start
	I0731 12:29:31.082607  986636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-198804
	I0731 12:29:31.149216  986636 provision.go:138] copyHostCerts
	I0731 12:29:31.149256  986636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem
	I0731 12:29:31.149288  986636 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem, removing ...
	I0731 12:29:31.149295  986636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem
	I0731 12:29:31.149366  986636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/ca.pem (1082 bytes)
	I0731 12:29:31.149439  986636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem
	I0731 12:29:31.149455  986636 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem, removing ...
	I0731 12:29:31.149460  986636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem
	I0731 12:29:31.149487  986636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/cert.pem (1123 bytes)
	I0731 12:29:31.149537  986636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem
	I0731 12:29:31.149554  986636 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem, removing ...
	I0731 12:29:31.149558  986636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem
	I0731 12:29:31.149581  986636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16968-847174/.minikube/key.pem (1679 bytes)
	I0731 12:29:31.149633  986636 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-198804 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube force-systemd-flag-198804]
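	The SAN list baked into the generated server certificate can be confirmed after the fact (a sketch; the -ext option requires OpenSSL 1.1.1 or newer):

	    openssl x509 -noout -ext subjectAltName \
	      -in /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem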
	I0731 12:29:31.597371  986636 provision.go:172] copyRemoteCerts
	I0731 12:29:31.597527  986636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 12:29:31.597598  986636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-198804
	I0731 12:29:31.619294  986636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36037 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/force-systemd-flag-198804/id_rsa Username:docker}
	I0731 12:29:31.724649  986636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 12:29:31.724727  986636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 12:29:31.757703  986636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 12:29:31.757813  986636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 12:29:31.801586  986636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 12:29:31.801645  986636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0731 12:29:31.841758  986636 provision.go:86] duration metric: configureAuth took 765.697449ms
	I0731 12:29:31.841829  986636 ubuntu.go:193] setting minikube options for container-runtime
	I0731 12:29:31.842057  986636 config.go:182] Loaded profile config "force-systemd-flag-198804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 12:29:31.842196  986636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-198804
	I0731 12:29:31.863278  986636 main.go:141] libmachine: Using SSH client type: native
	I0731 12:29:31.863707  986636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f5b0] 0x3a1f40 <nil>  [] 0s} 127.0.0.1 36037 <nil> <nil>}
	I0731 12:29:31.863725  986636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 12:29:32.196595  986636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 12:29:32.196618  986636 machine.go:91] provisioned docker machine in 1.721790029s
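	The sysconfig drop-in written a few lines above ends up on the node as nothing more than (contents taken from the echoed command output):

	    # /etc/sysconfig/crio.minikube
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '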
	I0731 12:29:32.196628  986636 client.go:171] LocalClient.Create took 8.435296652s
	I0731 12:29:32.196640  986636 start.go:167] duration metric: libmachine.API.Create for "force-systemd-flag-198804" took 8.43535385s
	I0731 12:29:32.196651  986636 start.go:300] post-start starting for "force-systemd-flag-198804" (driver="docker")
	I0731 12:29:32.196660  986636 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 12:29:32.196732  986636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 12:29:32.196780  986636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-198804
	I0731 12:29:32.222781  986636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36037 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/force-systemd-flag-198804/id_rsa Username:docker}
	I0731 12:29:32.328545  986636 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 12:29:32.336464  986636 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 12:29:32.336498  986636 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 12:29:32.336509  986636 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 12:29:32.336515  986636 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0731 12:29:32.336525  986636 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-847174/.minikube/addons for local assets ...
	I0731 12:29:32.336594  986636 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-847174/.minikube/files for local assets ...
	I0731 12:29:32.336681  986636 filesync.go:149] local asset: /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem -> 8525502.pem in /etc/ssl/certs
	I0731 12:29:32.336690  986636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem -> /etc/ssl/certs/8525502.pem
	I0731 12:29:32.336796  986636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 12:29:32.350827  986636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/ssl/certs/8525502.pem --> /etc/ssl/certs/8525502.pem (1708 bytes)
	I0731 12:29:32.389357  986636 start.go:303] post-start completed in 192.692268ms
	I0731 12:29:32.389746  986636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-198804
	I0731 12:29:32.413672  986636 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/force-systemd-flag-198804/config.json ...
	I0731 12:29:32.413959  986636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 12:29:32.414013  986636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-198804
	I0731 12:29:32.438526  986636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36037 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/force-systemd-flag-198804/id_rsa Username:docker}
	I0731 12:29:32.539520  986636 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 12:29:32.548172  986636 start.go:128] duration metric: createHost completed in 8.789033385s
	I0731 12:29:32.548195  986636 start.go:83] releasing machines lock for "force-systemd-flag-198804", held for 8.789172831s
	I0731 12:29:32.548276  986636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-198804
	I0731 12:29:32.577636  986636 ssh_runner.go:195] Run: cat /version.json
	I0731 12:29:32.577690  986636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-198804
	I0731 12:29:32.577964  986636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 12:29:32.578023  986636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-198804
	I0731 12:29:32.608800  986636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36037 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/force-systemd-flag-198804/id_rsa Username:docker}
	I0731 12:29:32.620206  986636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36037 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/force-systemd-flag-198804/id_rsa Username:docker}
	I0731 12:29:32.718275  986636 ssh_runner.go:195] Run: systemctl --version
	I0731 12:29:32.870886  986636 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 12:29:33.054467  986636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 12:29:33.061162  986636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 12:29:33.105236  986636 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0731 12:29:33.105322  986636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 12:29:33.166710  986636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0731 12:29:33.166736  986636 start.go:466] detecting cgroup driver to use...
	I0731 12:29:33.166749  986636 start.go:470] using "systemd" cgroup driver as enforced via flags
	I0731 12:29:33.166805  986636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 12:29:33.194953  986636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:29:33.211986  986636 docker.go:196] disabling cri-docker service (if available) ...
	I0731 12:29:33.212091  986636 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 12:29:33.229819  986636 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 12:29:33.249857  986636 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 12:29:33.397397  986636 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 12:29:33.551630  986636 docker.go:212] disabling docker service ...
	I0731 12:29:33.551687  986636 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 12:29:33.582208  986636 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 12:29:33.600754  986636 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 12:29:33.741959  986636 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 12:29:33.907992  986636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 12:29:33.925639  986636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:29:33.956650  986636 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 12:29:33.956716  986636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 12:29:33.969552  986636 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0731 12:29:33.969620  986636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 12:29:33.982710  986636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 12:29:33.996765  986636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
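	After the three sed edits above, the relevant directives in the drop-in should read (a sketch; the rest of the file is left untouched):

	    # /etc/crio/crio.conf.d/02-crio.conf (relevant lines)
	    pause_image = "registry.k8s.io/pause:3.9"
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"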
	I0731 12:29:34.011434  986636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 12:29:34.024683  986636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 12:29:34.038214  986636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 12:29:34.050503  986636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:29:34.192889  986636 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 12:29:34.378587  986636 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 12:29:34.378655  986636 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 12:29:34.386065  986636 start.go:534] Will wait 60s for crictl version
	I0731 12:29:34.386136  986636 ssh_runner.go:195] Run: which crictl
	I0731 12:29:34.393952  986636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 12:29:34.449562  986636 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0731 12:29:34.449653  986636 ssh_runner.go:195] Run: crio --version
	I0731 12:29:34.508097  986636 ssh_runner.go:195] Run: crio --version
	I0731 12:29:34.575456  986636 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
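	
	Note: the sed and printf steps above are how minikube points crictl at the CRI-O socket and sets the pause image and cgroup manager. A minimal sketch of the two files those commands would leave behind (an assumption about the stock base-image layout; only the keys touched above are shown):
	
	  $ sudo cat /etc/crictl.yaml
	  runtime-endpoint: unix:///var/run/crio/crio.sock
	
	  $ sudo cat /etc/crio/crio.conf.d/02-crio.conf   # sketch: the real drop-in may carry more keys
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.9"
	
	  [crio.runtime]
	  cgroup_manager = "systemd"
	  conmon_cgroup = "pod"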
	
	* 
	* ==> CRI-O <==
	* Jul 31 12:29:08 pause-267284 crio[2709]: time="2023-07-31 12:29:08.654842586Z" level=info msg="Started container" PID=3135 containerID=e94cae3497e126b8f4a7aab6aee3564774d49e63b191f5bb61cffe0dc590d499 description=kube-system/kube-proxy-qrkr7/kube-proxy id=8ab63a06-144c-4bcf-869a-24eb4489f8b1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=14acedab36a5addbdec19c6f94b9585fa35f9c27e470dac20c39b4ca923416b4
	Jul 31 12:29:16 pause-267284 crio[2709]: time="2023-07-31 12:29:16.044732422Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Jul 31 12:29:16 pause-267284 crio[2709]: time="2023-07-31 12:29:16.080716206Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jul 31 12:29:16 pause-267284 crio[2709]: time="2023-07-31 12:29:16.080751480Z" level=info msg="Updated default CNI network name to kindnet"
	Jul 31 12:29:16 pause-267284 crio[2709]: time="2023-07-31 12:29:16.080769417Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Jul 31 12:29:16 pause-267284 crio[2709]: time="2023-07-31 12:29:16.105748855Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jul 31 12:29:16 pause-267284 crio[2709]: time="2023-07-31 12:29:16.105788034Z" level=info msg="Updated default CNI network name to kindnet"
	Jul 31 12:29:16 pause-267284 crio[2709]: time="2023-07-31 12:29:16.105806028Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Jul 31 12:29:16 pause-267284 crio[2709]: time="2023-07-31 12:29:16.151583608Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jul 31 12:29:16 pause-267284 crio[2709]: time="2023-07-31 12:29:16.151625003Z" level=info msg="Updated default CNI network name to kindnet"
	Jul 31 12:29:16 pause-267284 crio[2709]: time="2023-07-31 12:29:16.151642578Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Jul 31 12:29:16 pause-267284 crio[2709]: time="2023-07-31 12:29:16.201527083Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jul 31 12:29:16 pause-267284 crio[2709]: time="2023-07-31 12:29:16.201560774Z" level=info msg="Updated default CNI network name to kindnet"
	Jul 31 12:29:25 pause-267284 crio[2709]: time="2023-07-31 12:29:25.905816409Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=08fef7a5-82f4-46c1-a740-c09445a59853 name=/runtime.v1.ImageService/ImageStatus
	Jul 31 12:29:25 pause-267284 crio[2709]: time="2023-07-31 12:29:25.906058025Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105 registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e],Size_:51393451,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=08fef7a5-82f4-46c1-a740-c09445a59853 name=/runtime.v1.ImageService/ImageStatus
	Jul 31 12:29:25 pause-267284 crio[2709]: time="2023-07-31 12:29:25.907065458Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=da219787-a4d9-4afe-8c0b-8ffa8f482f82 name=/runtime.v1.ImageService/ImageStatus
	Jul 31 12:29:25 pause-267284 crio[2709]: time="2023-07-31 12:29:25.907275879Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105 registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e],Size_:51393451,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=da219787-a4d9-4afe-8c0b-8ffa8f482f82 name=/runtime.v1.ImageService/ImageStatus
	Jul 31 12:29:25 pause-267284 crio[2709]: time="2023-07-31 12:29:25.908152587Z" level=info msg="Creating container: kube-system/coredns-5d78c9869d-r5tqt/coredns" id=1e6731bf-3672-48b4-92b2-7632c7cc096b name=/runtime.v1.RuntimeService/CreateContainer
	Jul 31 12:29:25 pause-267284 crio[2709]: time="2023-07-31 12:29:25.908260665Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 31 12:29:25 pause-267284 crio[2709]: time="2023-07-31 12:29:25.929194134Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a45798685793cf595a4988232d05749cb42db9ebc6742ef31ed89a4750ceaa04/merged/etc/passwd: no such file or directory"
	Jul 31 12:29:25 pause-267284 crio[2709]: time="2023-07-31 12:29:25.929247713Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a45798685793cf595a4988232d05749cb42db9ebc6742ef31ed89a4750ceaa04/merged/etc/group: no such file or directory"
	Jul 31 12:29:26 pause-267284 crio[2709]: time="2023-07-31 12:29:26.049838428Z" level=info msg="Created container 36d1a9376bdac32df3dbf6b66fa5abb3d633cc8635b3e26757284f09b25761c1: kube-system/coredns-5d78c9869d-r5tqt/coredns" id=1e6731bf-3672-48b4-92b2-7632c7cc096b name=/runtime.v1.RuntimeService/CreateContainer
	Jul 31 12:29:26 pause-267284 crio[2709]: time="2023-07-31 12:29:26.050913987Z" level=info msg="Starting container: 36d1a9376bdac32df3dbf6b66fa5abb3d633cc8635b3e26757284f09b25761c1" id=cc559091-b89e-4b9a-a6fa-7b9f0465bd05 name=/runtime.v1.RuntimeService/StartContainer
	Jul 31 12:29:26 pause-267284 crio[2709]: time="2023-07-31 12:29:26.081313432Z" level=info msg="Started container" PID=3553 containerID=36d1a9376bdac32df3dbf6b66fa5abb3d633cc8635b3e26757284f09b25761c1 description=kube-system/coredns-5d78c9869d-r5tqt/coredns id=cc559091-b89e-4b9a-a6fa-7b9f0465bd05 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d2e0cb8b293b9ee8c31a48e5997feec1256856929e1c68dde3418a3e8f10654c
	Jul 31 12:29:32 pause-267284 crio[2709]: time="2023-07-31 12:29:32.670685925Z" level=info msg="Stopping container: 36d1a9376bdac32df3dbf6b66fa5abb3d633cc8635b3e26757284f09b25761c1 (timeout: 30s)" id=8deaa047-abb8-4b23-91d5-fd5280f464bb name=/runtime.v1.RuntimeService/StopContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	36d1a9376bdac       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   9 seconds ago       Running             coredns                   2                   d2e0cb8b293b9       coredns-5d78c9869d-r5tqt
	3cc1dd482ad59       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79   28 seconds ago      Running             kindnet-cni               2                   d7b484e183e1f       kindnet-bfc8h
	e9a0cf1447820       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737   28 seconds ago      Running             etcd                      2                   acfb133348ebf       etcd-pause-267284
	491b004b30ba9       ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8   28 seconds ago      Running             kube-controller-manager   2                   e6a4c176ffe0a       kube-controller-manager-pause-267284
	e94cae3497e12       fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a   28 seconds ago      Running             kube-proxy                2                   14acedab36a5a       kube-proxy-qrkr7
	9d384afd8c673       39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473   28 seconds ago      Running             kube-apiserver            2                   e66928e22a651       kube-apiserver-pause-267284
	683efe5909502       bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540   28 seconds ago      Running             kube-scheduler            2                   cedee862ab07b       kube-scheduler-pause-267284
	318a1dbf18ca7       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   28 seconds ago      Running             coredns                   2                   419101fcd7188       coredns-5d78c9869d-nc82f
	1c0a7486e608f       39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473   40 seconds ago      Exited              kube-apiserver            1                   e66928e22a651       kube-apiserver-pause-267284
	4158889d359bf       fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a   41 seconds ago      Exited              kube-proxy                1                   14acedab36a5a       kube-proxy-qrkr7
	942ea5c65e250       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737   41 seconds ago      Exited              etcd                      1                   acfb133348ebf       etcd-pause-267284
	966fcf83fa1b3       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79   41 seconds ago      Exited              kindnet-cni               1                   d7b484e183e1f       kindnet-bfc8h
	d9b25e90709fd       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   41 seconds ago      Exited              coredns                   1                   d2e0cb8b293b9       coredns-5d78c9869d-r5tqt
	74ae9c2ecc8dc       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   41 seconds ago      Exited              coredns                   1                   419101fcd7188       coredns-5d78c9869d-nc82f
	373973d9678f5       ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8   41 seconds ago      Exited              kube-controller-manager   1                   e6a4c176ffe0a       kube-controller-manager-pause-267284
	457558821ba8b       bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540   41 seconds ago      Exited              kube-scheduler            1                   cedee862ab07b       kube-scheduler-pause-267284
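	
	Note: a listing equivalent to the table above can be reproduced on the node with crictl, which reads the endpoint configured in /etc/crictl.yaml; the -a flag keeps the Exited attempt-1 containers in view:
	
	  $ sudo crictl ps -a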
	
	* 
	* ==> coredns [318a1dbf18ca7bf49c82d388ca78a35c60601b200318785f27ff4beff848bf3e] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41995 - 35518 "HINFO IN 8787378084303816787.8563877881973228632. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032476457s
	
	* 
	* ==> coredns [36d1a9376bdac32df3dbf6b66fa5abb3d633cc8635b3e26757284f09b25761c1] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57978 - 56709 "HINFO IN 4361633289896798266.2655127048618595159. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015186315s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [74ae9c2ecc8dc4bf134f91af61ff4ec9db0e63f8935348a4ef488b7464017f2c] <==
	* 
	* 
	* ==> coredns [d9b25e90709fdc96747443a2fadfdccd536697833e7746ee6cb43c02625062ed] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:46745 - 2319 "HINFO IN 7700068273409356951.1249373602562824233. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023546521s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> describe nodes <==
	* Name:               pause-267284
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-267284
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0b7b0f4114385a1c2b88c73e894c2289f44aee35
	                    minikube.k8s.io/name=pause-267284
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_31T12_27_56_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 31 Jul 2023 12:27:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-267284
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 31 Jul 2023 12:29:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 31 Jul 2023 12:28:40 +0000   Mon, 31 Jul 2023 12:27:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 31 Jul 2023 12:28:40 +0000   Mon, 31 Jul 2023 12:27:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 31 Jul 2023 12:28:40 +0000   Mon, 31 Jul 2023 12:27:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 31 Jul 2023 12:28:40 +0000   Mon, 31 Jul 2023 12:28:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-267284
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f85dc844f7342719283693062616751
	  System UUID:                9a98f658-0a10-4566-98fd-fc96d2cf9eff
	  Boot ID:                    3709f028-2d57-4df1-ae3d-22c113dc2eeb
	  Kernel Version:             5.15.0-1040-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-nc82f                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     88s
	  kube-system                 coredns-5d78c9869d-r5tqt                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     88s
	  kube-system                 etcd-pause-267284                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         100s
	  kube-system                 kindnet-bfc8h                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      88s
	  kube-system                 kube-apiserver-pause-267284             250m (12%)    0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-controller-manager-pause-267284    200m (10%)    0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-proxy-qrkr7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-scheduler-pause-267284             100m (5%)     0 (0%)      0 (0%)           0 (0%)         100s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 85s                  kube-proxy       
	  Normal   Starting                 18s                  kube-proxy       
	  Normal   Starting                 111s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  111s (x8 over 111s)  kubelet          Node pause-267284 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    111s (x8 over 111s)  kubelet          Node pause-267284 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     111s (x8 over 111s)  kubelet          Node pause-267284 status is now: NodeHasSufficientPID
	  Normal   Starting                 101s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  101s                 kubelet          Node pause-267284 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    101s                 kubelet          Node pause-267284 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     101s                 kubelet          Node pause-267284 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           88s                  node-controller  Node pause-267284 event: Registered Node pause-267284 in Controller
	  Normal   NodeReady                56s                  kubelet          Node pause-267284 status is now: NodeReady
	  Warning  ContainerGCFailed        41s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           8s                   node-controller  Node pause-267284 event: Registered Node pause-267284 in Controller
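	
	Note: this section is kubectl describe output captured by minikube logs. The same view can be pulled interactively with the cluster's kubectl context (the context name matches the profile, as the harness commands at the end of this report confirm):
	
	  $ kubectl --context pause-267284 describe node pause-267284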
	
	* 
	* ==> dmesg <==
	* [  +0.001037] FS-Cache: O-key=[8] 'ede1c90000000000'
	[  +0.000719] FS-Cache: N-cookie c=000000ad [p=000000a4 fl=2 nc=0 na=1]
	[  +0.000960] FS-Cache: N-cookie d=00000000d17d7ada{9p.inode} n=000000001a6bd468
	[  +0.001024] FS-Cache: N-key=[8] 'ede1c90000000000'
	[  +0.005951] FS-Cache: Duplicate cookie detected
	[  +0.000744] FS-Cache: O-cookie c=000000a7 [p=000000a4 fl=226 nc=0 na=1]
	[  +0.001019] FS-Cache: O-cookie d=00000000d17d7ada{9p.inode} n=0000000040ec07b0
	[  +0.001100] FS-Cache: O-key=[8] 'ede1c90000000000'
	[  +0.000739] FS-Cache: N-cookie c=000000ae [p=000000a4 fl=2 nc=0 na=1]
	[  +0.000924] FS-Cache: N-cookie d=00000000d17d7ada{9p.inode} n=00000000bcbfd487
	[  +0.001134] FS-Cache: N-key=[8] 'ede1c90000000000'
	[  +2.785467] FS-Cache: Duplicate cookie detected
	[  +0.000734] FS-Cache: O-cookie c=000000a5 [p=000000a4 fl=226 nc=0 na=1]
	[  +0.000951] FS-Cache: O-cookie d=00000000d17d7ada{9p.inode} n=000000006d9d7fe3
	[  +0.001098] FS-Cache: O-key=[8] 'ebe1c90000000000'
	[  +0.000685] FS-Cache: N-cookie c=000000b0 [p=000000a4 fl=2 nc=0 na=1]
	[  +0.000905] FS-Cache: N-cookie d=00000000d17d7ada{9p.inode} n=0000000073926d86
	[  +0.001020] FS-Cache: N-key=[8] 'ebe1c90000000000'
	[  +0.282652] FS-Cache: Duplicate cookie detected
	[  +0.000710] FS-Cache: O-cookie c=000000aa [p=000000a4 fl=226 nc=0 na=1]
	[  +0.001044] FS-Cache: O-cookie d=00000000d17d7ada{9p.inode} n=000000008660d6a7
	[  +0.001083] FS-Cache: O-key=[8] 'f4e1c90000000000'
	[  +0.000746] FS-Cache: N-cookie c=000000b1 [p=000000a4 fl=2 nc=0 na=1]
	[  +0.000957] FS-Cache: N-cookie d=00000000d17d7ada{9p.inode} n=000000007f9efda2
	[  +0.001104] FS-Cache: N-key=[8] 'f4e1c90000000000'
	
	* 
	* ==> etcd [942ea5c65e2504aa623ea11a7f94df96dd4ff81a9805adc34d9ed32fdf7b5b13] <==
	* 
	* 
	* ==> etcd [e9a0cf1447820d06fab34678c8119dda8944ae2b19d40f520904b6e8ec63a418] <==
	* {"level":"info","ts":"2023-07-31T12:29:08.714Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-31T12:29:08.714Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-31T12:29:08.715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2023-07-31T12:29:08.717Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-07-31T12:29:08.718Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-31T12:29:08.718Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-31T12:29:08.733Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-31T12:29:08.733Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-31T12:29:08.733Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-31T12:29:08.734Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-07-31T12:29:08.734Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-07-31T12:29:10.376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2023-07-31T12:29:10.376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-07-31T12:29:10.376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2023-07-31T12:29:10.376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2023-07-31T12:29:10.376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-07-31T12:29:10.376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2023-07-31T12:29:10.376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-07-31T12:29:10.407Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-267284 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-31T12:29:10.407Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-31T12:29:10.409Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-07-31T12:29:10.409Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-31T12:29:10.409Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-31T12:29:10.409Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-31T12:29:10.410Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  12:29:36 up 20:12,  0 users,  load average: 3.98, 3.01, 2.55
	Linux pause-267284 5.15.0-1040-aws #45~20.04.1-Ubuntu SMP Tue Jul 11 19:11:12 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [3cc1dd482ad59c3a122ac141b717f2e8ae794bd87a485ff0bafed9eb6917a650] <==
	* I0731 12:29:08.379482       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0731 12:29:08.380454       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0731 12:29:08.380679       1 main.go:116] setting mtu 1500 for CNI 
	I0731 12:29:08.380734       1 main.go:146] kindnetd IP family: "ipv4"
	I0731 12:29:08.380783       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0731 12:29:16.027171       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0731 12:29:16.044404       1 main.go:227] handling current node
	I0731 12:29:26.068330       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0731 12:29:26.068619       1 main.go:227] handling current node
	I0731 12:29:36.113964       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0731 12:29:36.114085       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [966fcf83fa1b3f2df7e1f11ee73cf7853d74cff7ef5f8c0cbd1dd0646493eaa0] <==
	* I0731 12:28:54.829181       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0731 12:28:54.830944       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0731 12:28:54.831231       1 main.go:116] setting mtu 1500 for CNI 
	I0731 12:28:54.832163       1 main.go:146] kindnetd IP family: "ipv4"
	I0731 12:28:54.832253       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	
	* 
	* ==> kube-apiserver [1c0a7486e608f7b790b4999d77a0c25c4b8076e51257be94ed971399e71b1db1] <==
	* 
	* 
	* ==> kube-apiserver [9d384afd8c673329ffbe1ef39275b26b78c95fa1a365c37d9b927126c6e6c673] <==
	* I0731 12:29:15.688810       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0731 12:29:15.688834       1 aggregator.go:150] waiting for initial CRD sync...
	I0731 12:29:15.688848       1 controller.go:83] Starting OpenAPI AggregationController
	I0731 12:29:15.689596       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0731 12:29:15.879959       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0731 12:29:15.879989       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0731 12:29:15.958428       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 12:29:15.969192       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0731 12:29:15.969221       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0731 12:29:15.969273       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0731 12:29:15.969778       1 shared_informer.go:318] Caches are synced for configmaps
	I0731 12:29:15.970223       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0731 12:29:15.970761       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 12:29:16.000614       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0731 12:29:16.000739       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 12:29:16.015059       1 aggregator.go:152] initial CRD sync complete...
	I0731 12:29:16.028223       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 12:29:16.028308       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 12:29:16.028346       1 cache.go:39] Caches are synced for autoregister controller
	I0731 12:29:16.029727       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	E0731 12:29:16.030088       1 controller.go:155] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0731 12:29:16.694705       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 12:29:28.265680       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0731 12:29:28.580564       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 12:29:28.583895       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [373973d9678f5a505c8e98a0e030c0ef9f15d1ddf0a52c1a34a60ade3775c82a] <==
	* 
	* 
	* ==> kube-controller-manager [491b004b30ba95ba4f5623a50134925e7724bdbd0882dea6661b7c5df1841838] <==
	* I0731 12:29:28.249229       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0731 12:29:28.253137       1 shared_informer.go:318] Caches are synced for ephemeral
	I0731 12:29:28.253222       1 shared_informer.go:318] Caches are synced for TTL
	I0731 12:29:28.254413       1 shared_informer.go:318] Caches are synced for disruption
	I0731 12:29:28.256622       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0731 12:29:28.263852       1 shared_informer.go:318] Caches are synced for attach detach
	I0731 12:29:28.276418       1 shared_informer.go:318] Caches are synced for PV protection
	I0731 12:29:28.276732       1 shared_informer.go:318] Caches are synced for crt configmap
	I0731 12:29:28.279123       1 shared_informer.go:318] Caches are synced for GC
	I0731 12:29:28.295252       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0731 12:29:28.317832       1 shared_informer.go:318] Caches are synced for taint
	I0731 12:29:28.317952       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0731 12:29:28.317983       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0731 12:29:28.318025       1 taint_manager.go:211] "Sending events to api server"
	I0731 12:29:28.318053       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-267284"
	I0731 12:29:28.318091       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0731 12:29:28.318238       1 event.go:307] "Event occurred" object="pause-267284" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-267284 event: Registered Node pause-267284 in Controller"
	I0731 12:29:28.362855       1 shared_informer.go:318] Caches are synced for resource quota
	I0731 12:29:28.388557       1 shared_informer.go:318] Caches are synced for stateful set
	I0731 12:29:28.407449       1 shared_informer.go:318] Caches are synced for resource quota
	I0731 12:29:28.415211       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-r5tqt"
	I0731 12:29:28.460220       1 shared_informer.go:318] Caches are synced for daemon sets
	I0731 12:29:28.778101       1 shared_informer.go:318] Caches are synced for garbage collector
	I0731 12:29:28.778133       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0731 12:29:28.803297       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [4158889d359bff3498f692d0bee4770d0525d9bb09a6ec5516f0d47e7262038f] <==
	* 
	* 
	* ==> kube-proxy [e94cae3497e126b8f4a7aab6aee3564774d49e63b191f5bb61cffe0dc590d499] <==
	* I0731 12:29:16.352334       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0731 12:29:16.352468       1 server_others.go:110] "Detected node IP" address="192.168.76.2"
	I0731 12:29:16.381925       1 server_others.go:554] "Using iptables proxy"
	I0731 12:29:17.653503       1 server_others.go:192] "Using iptables Proxier"
	I0731 12:29:17.653616       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0731 12:29:17.653652       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0731 12:29:17.653694       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0731 12:29:17.653789       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 12:29:17.654418       1 server.go:658] "Version info" version="v1.27.3"
	I0731 12:29:17.654667       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 12:29:17.655507       1 config.go:188] "Starting service config controller"
	I0731 12:29:17.660347       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0731 12:29:17.660450       1 config.go:97] "Starting endpoint slice config controller"
	I0731 12:29:17.660485       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0731 12:29:17.661150       1 config.go:315] "Starting node config controller"
	I0731 12:29:17.706902       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0731 12:29:17.707004       1 shared_informer.go:318] Caches are synced for node config
	I0731 12:29:17.760844       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0731 12:29:17.761042       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [457558821ba8b6422ab65c71240b616245289bd44aaf6202d0c8d1a2c3d8a8ba] <==
	* 
	* 
	* ==> kube-scheduler [683efe59095025b79e7d7d76c9eb66dcc965e41387a7cbadaa416d697f745030] <==
	* I0731 12:29:13.045918       1 serving.go:348] Generated self-signed cert in-memory
	I0731 12:29:17.933601       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0731 12:29:17.933705       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 12:29:17.985754       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0731 12:29:17.991959       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 12:29:17.992050       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0731 12:29:18.020220       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0731 12:29:17.992070       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 12:29:18.025705       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 12:29:17.992082       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0731 12:29:18.025815       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0731 12:29:18.122959       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0731 12:29:18.128235       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0731 12:29:18.128315       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.497239    1389 status_manager.go:809] "Failed to get status for pod" podUID=04de0c34-3dbf-4f29-8394-6effa170a95c pod="kube-system/kube-proxy-qrkr7" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qrkr7\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.497387    1389 status_manager.go:809] "Failed to get status for pod" podUID=de379dd6-9f7f-4c57-9e10-53a12f65acde pod="kube-system/coredns-5d78c9869d-nc82f" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-nc82f\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.497535    1389 status_manager.go:809] "Failed to get status for pod" podUID=eb27a8cb-17a9-44b8-808a-3947caa530e1 pod="kube-system/coredns-5d78c9869d-r5tqt" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r5tqt\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.497685    1389 status_manager.go:809] "Failed to get status for pod" podUID=5f827aa14242e9d8107d40e4c9cc1d87 pod="kube-system/kube-scheduler-pause-267284" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-267284\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.497842    1389 status_manager.go:809] "Failed to get status for pod" podUID=7bd2cfc4f48e154b67ac36a01c997137 pod="kube-system/etcd-pause-267284" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-267284\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.503042    1389 status_manager.go:809] "Failed to get status for pod" podUID=512025bc-3701-41f6-8fc5-cf18c81efbe7 pod="kube-system/kindnet-bfc8h" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-bfc8h\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.503239    1389 status_manager.go:809] "Failed to get status for pod" podUID=04de0c34-3dbf-4f29-8394-6effa170a95c pod="kube-system/kube-proxy-qrkr7" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qrkr7\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.503402    1389 status_manager.go:809] "Failed to get status for pod" podUID=de379dd6-9f7f-4c57-9e10-53a12f65acde pod="kube-system/coredns-5d78c9869d-nc82f" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-nc82f\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.503550    1389 status_manager.go:809] "Failed to get status for pod" podUID=eb27a8cb-17a9-44b8-808a-3947caa530e1 pod="kube-system/coredns-5d78c9869d-r5tqt" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r5tqt\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.503701    1389 status_manager.go:809] "Failed to get status for pod" podUID=5f827aa14242e9d8107d40e4c9cc1d87 pod="kube-system/kube-scheduler-pause-267284" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-267284\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.503851    1389 status_manager.go:809] "Failed to get status for pod" podUID=7bd2cfc4f48e154b67ac36a01c997137 pod="kube-system/etcd-pause-267284" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-267284\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.504005    1389 status_manager.go:809] "Failed to get status for pod" podUID=1387b335b2f7944f67e1a6d412fb421d pod="kube-system/kube-controller-manager-pause-267284" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-267284\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.504209    1389 status_manager.go:809] "Failed to get status for pod" podUID=33611319035893b2a47cdd2db1750141 pod="kube-system/kube-apiserver-pause-267284" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-267284\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.508272    1389 status_manager.go:809] "Failed to get status for pod" podUID=1387b335b2f7944f67e1a6d412fb421d pod="kube-system/kube-controller-manager-pause-267284" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-267284\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.508465    1389 status_manager.go:809] "Failed to get status for pod" podUID=33611319035893b2a47cdd2db1750141 pod="kube-system/kube-apiserver-pause-267284" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-267284\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.508637    1389 status_manager.go:809] "Failed to get status for pod" podUID=512025bc-3701-41f6-8fc5-cf18c81efbe7 pod="kube-system/kindnet-bfc8h" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-bfc8h\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.508794    1389 status_manager.go:809] "Failed to get status for pod" podUID=04de0c34-3dbf-4f29-8394-6effa170a95c pod="kube-system/kube-proxy-qrkr7" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qrkr7\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.508951    1389 status_manager.go:809] "Failed to get status for pod" podUID=de379dd6-9f7f-4c57-9e10-53a12f65acde pod="kube-system/coredns-5d78c9869d-nc82f" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-nc82f\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.509106    1389 status_manager.go:809] "Failed to get status for pod" podUID=eb27a8cb-17a9-44b8-808a-3947caa530e1 pod="kube-system/coredns-5d78c9869d-r5tqt" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r5tqt\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.509281    1389 status_manager.go:809] "Failed to get status for pod" podUID=5f827aa14242e9d8107d40e4c9cc1d87 pod="kube-system/kube-scheduler-pause-267284" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-267284\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:08 pause-267284 kubelet[1389]: I0731 12:29:08.509441    1389 status_manager.go:809] "Failed to get status for pod" podUID=7bd2cfc4f48e154b67ac36a01c997137 pod="kube-system/etcd-pause-267284" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-267284\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jul 31 12:29:11 pause-267284 kubelet[1389]: I0731 12:29:11.483550    1389 scope.go:115] "RemoveContainer" containerID="d9b25e90709fdc96747443a2fadfdccd536697833e7746ee6cb43c02625062ed"
	Jul 31 12:29:11 pause-267284 kubelet[1389]: E0731 12:29:11.483912    1389 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5d78c9869d-r5tqt_kube-system(eb27a8cb-17a9-44b8-808a-3947caa530e1)\"" pod="kube-system/coredns-5d78c9869d-r5tqt" podUID=eb27a8cb-17a9-44b8-808a-3947caa530e1
	Jul 31 12:29:16 pause-267284 kubelet[1389]: W0731 12:29:16.231263    1389 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Jul 31 12:29:25 pause-267284 kubelet[1389]: I0731 12:29:25.904031    1389 scope.go:115] "RemoveContainer" containerID="d9b25e90709fdc96747443a2fadfdccd536697833e7746ee6cb43c02625062ed"
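	
	Note: the entries above show kubelet backing off on coredns-5d78c9869d-r5tqt while the apiserver was unreachable, then retrying once it returned. To confirm the kube-system pods settled afterwards, mirroring the harness's own field-selector check below:
	
	  $ kubectl --context pause-267284 -n kube-system get pods -o wide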
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-267284 -n pause-267284
helpers_test.go:261: (dbg) Run:  kubectl --context pause-267284 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (53.18s)
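
To rerun just this test locally, Go's -run filter can target the subtest from the minikube repo root (a sketch; the CI job passes additional driver and container-runtime flags that are not shown here):

    go test ./test/integration -run 'TestPause/serial/SecondStartNoReconfiguration' -timeout 30m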

                                                
                                    

Test pass (262/298)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 9.36
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.27.3/json-events 7.79
11 TestDownloadOnly/v1.27.3/preload-exists 0
15 TestDownloadOnly/v1.27.3/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.22
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.15
19 TestBinaryMirror 0.6
22 TestAddons/Setup 153.99
24 TestAddons/parallel/Registry 15.48
26 TestAddons/parallel/InspektorGadget 11.02
27 TestAddons/parallel/MetricsServer 5.88
30 TestAddons/parallel/CSI 62.71
31 TestAddons/parallel/Headlamp 12.67
32 TestAddons/parallel/CloudSpanner 5.73
35 TestAddons/serial/GCPAuth/Namespaces 0.19
36 TestAddons/StoppedEnableDisable 12.27
37 TestCertOptions 37.54
38 TestCertExpiration 256.08
40 TestForceSystemdFlag 39.91
41 TestForceSystemdEnv 45.28
47 TestErrorSpam/setup 30.65
48 TestErrorSpam/start 0.84
49 TestErrorSpam/status 1.12
50 TestErrorSpam/pause 1.83
51 TestErrorSpam/unpause 1.99
52 TestErrorSpam/stop 1.48
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 79.24
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 42.92
59 TestFunctional/serial/KubeContext 0.08
60 TestFunctional/serial/KubectlGetPods 0.12
63 TestFunctional/serial/CacheCmd/cache/add_remote 4.03
64 TestFunctional/serial/CacheCmd/cache/add_local 1.23
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
66 TestFunctional/serial/CacheCmd/cache/list 0.05
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
68 TestFunctional/serial/CacheCmd/cache/cache_reload 2.11
69 TestFunctional/serial/CacheCmd/cache/delete 0.12
70 TestFunctional/serial/MinikubeKubectlCmd 0.39
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
72 TestFunctional/serial/ExtraConfig 33.53
73 TestFunctional/serial/ComponentHealth 0.11
74 TestFunctional/serial/LogsCmd 1.88
75 TestFunctional/serial/LogsFileCmd 1.85
76 TestFunctional/serial/InvalidService 5.01
78 TestFunctional/parallel/ConfigCmd 0.43
79 TestFunctional/parallel/DashboardCmd 11.14
80 TestFunctional/parallel/DryRun 0.67
81 TestFunctional/parallel/InternationalLanguage 0.34
82 TestFunctional/parallel/StatusCmd 1.49
86 TestFunctional/parallel/ServiceCmdConnect 11.88
87 TestFunctional/parallel/AddonsCmd 0.24
88 TestFunctional/parallel/PersistentVolumeClaim 27.43
90 TestFunctional/parallel/SSHCmd 0.82
91 TestFunctional/parallel/CpCmd 1.58
93 TestFunctional/parallel/FileSync 0.39
94 TestFunctional/parallel/CertSync 2.16
98 TestFunctional/parallel/NodeLabels 0.12
100 TestFunctional/parallel/NonActiveRuntimeDisabled 1.08
102 TestFunctional/parallel/License 0.3
104 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.68
105 TestFunctional/parallel/Version/short 0.09
106 TestFunctional/parallel/Version/components 1
107 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
109 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.67
110 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
111 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
112 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
113 TestFunctional/parallel/ImageCommands/ImageListYaml 0.35
114 TestFunctional/parallel/ImageCommands/ImageBuild 5.17
115 TestFunctional/parallel/ImageCommands/Setup 2
116 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.06
117 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.85
118 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.4
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.13
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/UpdateContextCmd/no_changes 0.27
126 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.25
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.08
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.69
130 TestFunctional/parallel/MountCmd/any-port 8.88
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.89
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.89
133 TestFunctional/parallel/MountCmd/specific-port 2.28
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.71
135 TestFunctional/parallel/ServiceCmd/DeployApp 7.24
136 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
137 TestFunctional/parallel/ServiceCmd/List 0.64
138 TestFunctional/parallel/ProfileCmd/profile_list 0.51
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.48
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.67
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.57
142 TestFunctional/parallel/ServiceCmd/Format 0.58
143 TestFunctional/parallel/ServiceCmd/URL 0.6
144 TestFunctional/delete_addon-resizer_images 0.09
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.02
150 TestIngressAddonLegacy/StartLegacyK8sCluster 102.95
152 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.54
153 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.65
157 TestJSONOutput/start/Command 78.54
158 TestJSONOutput/start/Audit 0
160 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/pause/Command 0.83
164 TestJSONOutput/pause/Audit 0
166 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/unpause/Command 0.74
170 TestJSONOutput/unpause/Audit 0
172 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/stop/Command 5.91
176 TestJSONOutput/stop/Audit 0
178 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
180 TestErrorJSONOutput 0.23
182 TestKicCustomNetwork/create_custom_network 45.7
183 TestKicCustomNetwork/use_default_bridge_network 33.22
184 TestKicExistingNetwork 34.39
185 TestKicCustomSubnet 37.8
186 TestKicStaticIP 39.61
187 TestMainNoArgs 0.06
188 TestMinikubeProfile 71
191 TestMountStart/serial/StartWithMountFirst 6.88
192 TestMountStart/serial/VerifyMountFirst 0.29
193 TestMountStart/serial/StartWithMountSecond 9.63
194 TestMountStart/serial/VerifyMountSecond 0.3
195 TestMountStart/serial/DeleteFirst 1.66
196 TestMountStart/serial/VerifyMountPostDelete 0.28
197 TestMountStart/serial/Stop 1.24
198 TestMountStart/serial/RestartStopped 8.14
199 TestMountStart/serial/VerifyMountPostStop 0.28
202 TestMultiNode/serial/FreshStart2Nodes 84.64
203 TestMultiNode/serial/DeployApp2Nodes 5.31
205 TestMultiNode/serial/AddNode 48.05
206 TestMultiNode/serial/ProfileList 0.35
207 TestMultiNode/serial/CopyFile 10.89
208 TestMultiNode/serial/StopNode 2.41
209 TestMultiNode/serial/StartAfterStop 12.34
210 TestMultiNode/serial/RestartKeepsNodes 123.09
211 TestMultiNode/serial/DeleteNode 5.24
212 TestMultiNode/serial/StopMultiNode 24.07
213 TestMultiNode/serial/RestartMultiNode 55.84
214 TestMultiNode/serial/ValidateNameConflict 35.96
219 TestPreload 142.79
221 TestScheduledStopUnix 110.95
224 TestInsufficientStorage 13.84
227 TestKubernetesUpgrade 394.03
230 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
231 TestNoKubernetes/serial/StartWithK8s 43.56
232 TestNoKubernetes/serial/StartWithStopK8s 7.76
233 TestNoKubernetes/serial/Start 8.16
234 TestNoKubernetes/serial/VerifyK8sNotRunning 0.38
235 TestNoKubernetes/serial/ProfileList 1.09
236 TestNoKubernetes/serial/Stop 1.28
237 TestNoKubernetes/serial/StartNoArgs 7.69
238 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
239 TestStoppedBinaryUpgrade/Setup 1.09
241 TestStoppedBinaryUpgrade/MinikubeLogs 0.65
250 TestPause/serial/Start 84.04
259 TestNetworkPlugins/group/false 5.69
264 TestStartStop/group/old-k8s-version/serial/FirstStart 145.5
265 TestStartStop/group/old-k8s-version/serial/DeployApp 9.69
266 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.12
267 TestStartStop/group/old-k8s-version/serial/Stop 12.21
268 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.33
269 TestStartStop/group/old-k8s-version/serial/SecondStart 428.41
271 TestStartStop/group/no-preload/serial/FirstStart 72.95
272 TestStartStop/group/no-preload/serial/DeployApp 10.51
273 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.21
274 TestStartStop/group/no-preload/serial/Stop 12.12
275 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
276 TestStartStop/group/no-preload/serial/SecondStart 618.07
277 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.04
278 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.15
279 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.37
280 TestStartStop/group/old-k8s-version/serial/Pause 3.92
282 TestStartStop/group/embed-certs/serial/FirstStart 54.8
283 TestStartStop/group/embed-certs/serial/DeployApp 9.57
284 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.33
285 TestStartStop/group/embed-certs/serial/Stop 12.16
286 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
287 TestStartStop/group/embed-certs/serial/SecondStart 363.38
288 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.03
289 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
290 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.36
291 TestStartStop/group/no-preload/serial/Pause 3.42
293 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 79.65
294 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.57
295 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.62
296 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.32
297 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
298 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 346.65
299 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.05
300 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
301 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.35
302 TestStartStop/group/embed-certs/serial/Pause 3.43
304 TestStartStop/group/newest-cni/serial/FirstStart 40.91
305 TestStartStop/group/newest-cni/serial/DeployApp 0
306 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.12
307 TestStartStop/group/newest-cni/serial/Stop 1.34
308 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
309 TestStartStop/group/newest-cni/serial/SecondStart 31.36
310 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
311 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
312 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.41
313 TestStartStop/group/newest-cni/serial/Pause 3.27
314 TestNetworkPlugins/group/auto/Start 77.23
315 TestNetworkPlugins/group/auto/KubeletFlags 0.32
316 TestNetworkPlugins/group/auto/NetCatPod 10.39
317 TestNetworkPlugins/group/auto/DNS 0.23
318 TestNetworkPlugins/group/auto/Localhost 0.21
319 TestNetworkPlugins/group/auto/HairPin 0.22
320 TestNetworkPlugins/group/kindnet/Start 79.89
321 TestNetworkPlugins/group/kindnet/ControllerPod 5.05
322 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
323 TestNetworkPlugins/group/kindnet/NetCatPod 11.39
324 TestNetworkPlugins/group/kindnet/DNS 0.42
325 TestNetworkPlugins/group/kindnet/Localhost 0.32
326 TestNetworkPlugins/group/kindnet/HairPin 0.32
327 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.05
328 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.15
329 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.46
330 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.04
331 TestNetworkPlugins/group/calico/Start 82.36
332 TestNetworkPlugins/group/custom-flannel/Start 72.11
333 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
334 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.42
335 TestNetworkPlugins/group/calico/ControllerPod 5.04
336 TestNetworkPlugins/group/custom-flannel/DNS 0.25
337 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
338 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
339 TestNetworkPlugins/group/calico/KubeletFlags 0.33
340 TestNetworkPlugins/group/calico/NetCatPod 11.47
341 TestNetworkPlugins/group/calico/DNS 0.32
342 TestNetworkPlugins/group/calico/Localhost 0.25
343 TestNetworkPlugins/group/calico/HairPin 0.27
344 TestNetworkPlugins/group/enable-default-cni/Start 92.57
345 TestNetworkPlugins/group/flannel/Start 74.06
346 TestNetworkPlugins/group/flannel/ControllerPod 5.04
347 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
348 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.37
349 TestNetworkPlugins/group/flannel/KubeletFlags 0.45
350 TestNetworkPlugins/group/flannel/NetCatPod 12.57
351 TestNetworkPlugins/group/enable-default-cni/DNS 0.37
352 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
353 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
354 TestNetworkPlugins/group/flannel/DNS 0.24
355 TestNetworkPlugins/group/flannel/Localhost 0.24
356 TestNetworkPlugins/group/flannel/HairPin 0.25
357 TestNetworkPlugins/group/bridge/Start 49.28
358 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
359 TestNetworkPlugins/group/bridge/NetCatPod 10.34
360 TestNetworkPlugins/group/bridge/DNS 0.21
361 TestNetworkPlugins/group/bridge/Localhost 0.2
362 TestNetworkPlugins/group/bridge/HairPin 0.18

TestDownloadOnly/v1.16.0/json-events (9.36s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-593678 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-593678 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.360867208s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (9.36s)
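
Note: this download-only pass can be reproduced outside CI with the same flags. A minimal sketch, using a stock minikube binary in place of the CI build (profile name, versions and flags taken from this run):

	$ minikube start -o=json --download-only -p download-only-593678 --force \
	    --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker
	# Caches the kicbase image and the v1.16.0 cri-o preload tarball without
	# creating a cluster; the later "preload-exists" check only stats that cache.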

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-593678
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-593678: exit status 85 (69.52806ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-593678 | jenkins | v1.31.1 | 31 Jul 23 11:47 UTC |          |
	|         | -p download-only-593678        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/31 11:47:31
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 11:47:31.339780  852555 out.go:296] Setting OutFile to fd 1 ...
	I0731 11:47:31.339951  852555 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:47:31.339958  852555 out.go:309] Setting ErrFile to fd 2...
	I0731 11:47:31.339964  852555 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:47:31.340261  852555 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-847174/.minikube/bin
	W0731 11:47:31.340391  852555 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16968-847174/.minikube/config/config.json: open /home/jenkins/minikube-integration/16968-847174/.minikube/config/config.json: no such file or directory
	I0731 11:47:31.340780  852555 out.go:303] Setting JSON to true
	I0731 11:47:31.341812  852555 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":70199,"bootTime":1690733853,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0731 11:47:31.341879  852555 start.go:138] virtualization:  
	I0731 11:47:31.345630  852555 out.go:97] [download-only-593678] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0731 11:47:31.347636  852555 out.go:169] MINIKUBE_LOCATION=16968
	W0731 11:47:31.345908  852555 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball: no such file or directory
	I0731 11:47:31.345983  852555 notify.go:220] Checking for updates...
	I0731 11:47:31.349816  852555 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:47:31.351718  852555 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig
	I0731 11:47:31.353679  852555 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube
	I0731 11:47:31.355817  852555 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0731 11:47:31.359943  852555 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 11:47:31.360321  852555 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 11:47:31.383888  852555 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 11:47:31.383969  852555 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 11:47:31.467391  852555 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-07-31 11:47:31.457345354 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 11:47:31.467506  852555 docker.go:294] overlay module found
	I0731 11:47:31.470014  852555 out.go:97] Using the docker driver based on user configuration
	I0731 11:47:31.470039  852555 start.go:298] selected driver: docker
	I0731 11:47:31.470053  852555 start.go:898] validating driver "docker" against <nil>
	I0731 11:47:31.470158  852555 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 11:47:31.538738  852555 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-07-31 11:47:31.529366059 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 11:47:31.538896  852555 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 11:47:31.539170  852555 start_flags.go:382] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0731 11:47:31.539319  852555 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 11:47:31.541866  852555 out.go:169] Using Docker driver with root privileges
	I0731 11:47:31.543733  852555 cni.go:84] Creating CNI manager for ""
	I0731 11:47:31.543779  852555 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 11:47:31.543793  852555 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 11:47:31.543811  852555 start_flags.go:319] config:
	{Name:download-only-593678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-593678 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Ne
tworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 11:47:31.546548  852555 out.go:97] Starting control plane node download-only-593678 in cluster download-only-593678
	I0731 11:47:31.546574  852555 cache.go:122] Beginning downloading kic base image for docker with crio
	I0731 11:47:31.548421  852555 out.go:97] Pulling base image ...
	I0731 11:47:31.548449  852555 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0731 11:47:31.548547  852555 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0731 11:47:31.565344  852555 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0731 11:47:31.565501  852555 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0731 11:47:31.565604  852555 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0731 11:47:31.619845  852555 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0731 11:47:31.619881  852555 cache.go:57] Caching tarball of preloaded images
	I0731 11:47:31.620046  852555 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0731 11:47:31.622318  852555 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0731 11:47:31.622341  852555 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0731 11:47:31.747823  852555 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-593678"
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
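
Note: the non-zero exit above is expected rather than a failure. With --download-only no node is ever created, so "minikube logs" has nothing to collect beyond the audit and last-start sections and exits with status 85 (hence 'The control plane node "" does not exist.' in the output). A quick way to confirm, assuming the same profile:

	$ minikube logs -p download-only-593678; echo "exit: $?"
	# Prints the Audit/Last Start sections to stdout, then exits 85 because
	# the profile has no control-plane node to collect logs from.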

TestDownloadOnly/v1.27.3/json-events (7.79s)

=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-593678 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-593678 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.793146562s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (7.79s)

TestDownloadOnly/v1.27.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

TestDownloadOnly/v1.27.3/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-593678
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-593678: exit status 85 (71.879344ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-593678 | jenkins | v1.31.1 | 31 Jul 23 11:47 UTC |          |
	|         | -p download-only-593678        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-593678 | jenkins | v1.31.1 | 31 Jul 23 11:47 UTC |          |
	|         | -p download-only-593678        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/31 11:47:40
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 11:47:40.772444  852630 out.go:296] Setting OutFile to fd 1 ...
	I0731 11:47:40.772581  852630 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:47:40.772589  852630 out.go:309] Setting ErrFile to fd 2...
	I0731 11:47:40.772595  852630 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:47:40.772885  852630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-847174/.minikube/bin
	W0731 11:47:40.773010  852630 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16968-847174/.minikube/config/config.json: open /home/jenkins/minikube-integration/16968-847174/.minikube/config/config.json: no such file or directory
	I0731 11:47:40.773242  852630 out.go:303] Setting JSON to true
	I0731 11:47:40.774231  852630 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":70208,"bootTime":1690733853,"procs":311,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0731 11:47:40.774297  852630 start.go:138] virtualization:  
	I0731 11:47:40.776549  852630 out.go:97] [download-only-593678] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0731 11:47:40.778395  852630 out.go:169] MINIKUBE_LOCATION=16968
	I0731 11:47:40.776907  852630 notify.go:220] Checking for updates...
	I0731 11:47:40.781675  852630 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:47:40.783947  852630 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig
	I0731 11:47:40.785434  852630 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube
	I0731 11:47:40.788411  852630 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0731 11:47:40.791625  852630 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 11:47:40.792131  852630 config.go:182] Loaded profile config "download-only-593678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0731 11:47:40.792185  852630 start.go:806] api.Load failed for download-only-593678: filestore "download-only-593678": Docker machine "download-only-593678" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0731 11:47:40.792323  852630 driver.go:373] Setting default libvirt URI to qemu:///system
	W0731 11:47:40.792351  852630 start.go:806] api.Load failed for download-only-593678: filestore "download-only-593678": Docker machine "download-only-593678" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0731 11:47:40.816417  852630 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 11:47:40.816505  852630 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 11:47:40.910156  852630 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:39 SystemTime:2023-07-31 11:47:40.900138645 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 11:47:40.910261  852630 docker.go:294] overlay module found
	I0731 11:47:40.912055  852630 out.go:97] Using the docker driver based on existing profile
	I0731 11:47:40.912078  852630 start.go:298] selected driver: docker
	I0731 11:47:40.912085  852630 start.go:898] validating driver "docker" against &{Name:download-only-593678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-593678 Namespace:default APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticI
P: SSHAuthSock: SSHAgentPID:0}
	I0731 11:47:40.912314  852630 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 11:47:40.977648  852630 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:39 SystemTime:2023-07-31 11:47:40.967412299 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 11:47:40.978118  852630 cni.go:84] Creating CNI manager for ""
	I0731 11:47:40.978131  852630 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 11:47:40.978145  852630 start_flags.go:319] config:
	{Name:download-only-593678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:download-only-593678 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Ne
tworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 11:47:40.980165  852630 out.go:97] Starting control plane node download-only-593678 in cluster download-only-593678
	I0731 11:47:40.980188  852630 cache.go:122] Beginning downloading kic base image for docker with crio
	I0731 11:47:40.982008  852630 out.go:97] Pulling base image ...
	I0731 11:47:40.982038  852630 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 11:47:40.982170  852630 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0731 11:47:41.003710  852630 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0731 11:47:41.003813  852630 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0731 11:47:41.003837  852630 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0731 11:47:41.003843  852630 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0731 11:47:41.003856  852630 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0731 11:47:41.055769  852630 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4
	I0731 11:47:41.055794  852630 cache.go:57] Caching tarball of preloaded images
	I0731 11:47:41.055950  852630 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 11:47:41.057897  852630 out.go:97] Downloading Kubernetes v1.27.3 preload ...
	I0731 11:47:41.057927  852630 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4 ...
	I0731 11:47:41.172553  852630 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4?checksum=md5:5385d65818d7d3a2749f9dcda9541749 -> /home/jenkins/minikube-integration/16968-847174/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-593678"
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.22s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-593678
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-265698 --alsologtostderr --binary-mirror http://127.0.0.1:40293 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-265698" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-265698
--- PASS: TestBinaryMirror (0.60s)
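
Note: --binary-mirror redirects minikube's kubectl/kubelet/kubeadm downloads to an alternate HTTP endpoint. A rough sketch, assuming a local server already mirrors the release binaries at the paths minikube requests (the port here matches this run but is otherwise arbitrary):

	$ minikube start --download-only -p binary-mirror-265698 --alsologtostderr \
	    --binary-mirror http://127.0.0.1:40293 --driver=docker --container-runtime=crio
	$ minikube delete -p binary-mirror-265698    # clean up the profile afterwards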

TestAddons/Setup (153.99s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-arm64 start -p addons-708039 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-linux-arm64 start -p addons-708039 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m33.99228596s)
--- PASS: TestAddons/Setup (153.99s)
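
Note: the one-shot start above enables nine addons via repeated --addons flags; the same state can also be reached incrementally. A sketch, assuming the profile is already running:

	$ minikube addons enable ingress -p addons-708039
	$ minikube addons enable registry -p addons-708039
	# ...and likewise for metrics-server, volumesnapshots, csi-hostpath-driver,
	# gcp-auth, cloud-spanner, inspektor-gadget and ingress-dns.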

TestAddons/parallel/Registry (15.48s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 44.267149ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-d4mkn" [9853603b-0beb-4b34-b0d9-acffe26828eb] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.017159675s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-55qnm" [5c1d1fda-b8f3-4594-821f-583a2bd57c5e] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.014475446s
addons_test.go:316: (dbg) Run:  kubectl --context addons-708039 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-708039 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-708039 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.317623683s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-arm64 -p addons-708039 ip
2023/07/31 11:50:38 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p addons-708039 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.48s)
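
Note: the wget probe above is the core of this check: it resolves the registry Service's cluster DNS name from inside a throwaway pod. The same probe can be run by hand (image and service name exactly as the test uses them):

	$ kubectl --context addons-708039 run --rm registry-test --restart=Never \
	    --image=gcr.io/k8s-minikube/busybox -it -- \
	    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# A 200 response confirms the registry Service is resolvable and serving.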

TestAddons/parallel/InspektorGadget (11.02s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-lhtpc" [fb74a6cb-1706-4770-89ed-05d76910aed2] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.017284617s
addons_test.go:817: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-708039
addons_test.go:817: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-708039: (6.006577217s)
--- PASS: TestAddons/parallel/InspektorGadget (11.02s)

TestAddons/parallel/MetricsServer (5.88s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 19.15477ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-pj6dz" [75434341-2f4e-41e2-966e-e15c3c9c5cee] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.022359152s
addons_test.go:391: (dbg) Run:  kubectl --context addons-708039 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p addons-708039 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.88s)
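
Note: the functional check is simply that the Metrics API answers once the addon's pod is healthy. Resource metrics can then be queried directly; the node query below is an extra check not performed by the test:

	$ kubectl --context addons-708039 top pods -n kube-system
	$ kubectl --context addons-708039 top nodes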

TestAddons/parallel/CSI (62.71s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 6.001672ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-708039 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-708039 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [677be215-5716-4450-a3f8-6c025eb0dcb7] Pending
helpers_test.go:344: "task-pv-pod" [677be215-5716-4450-a3f8-6c025eb0dcb7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [677be215-5716-4450-a3f8-6c025eb0dcb7] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.019407204s
addons_test.go:560: (dbg) Run:  kubectl --context addons-708039 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-708039 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-708039 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-708039 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-708039 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-708039 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-708039 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-708039 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-708039 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [8558e925-6b62-489d-8249-1da9b73b40e9] Pending
helpers_test.go:344: "task-pv-pod-restore" [8558e925-6b62-489d-8249-1da9b73b40e9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [8558e925-6b62-489d-8249-1da9b73b40e9] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.107194379s
addons_test.go:602: (dbg) Run:  kubectl --context addons-708039 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-708039 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-708039 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-arm64 -p addons-708039 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-arm64 -p addons-708039 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.952470259s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-arm64 -p addons-708039 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (62.71s)
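
The snapshot wait above is just a poll: re-run kubectl until .status.readyToUse reports true. A minimal Go sketch of that polling pattern, assuming only kubectl on PATH and the addons-708039 context from this run (the file name, timeout, and retry interval are illustrative, not the suite's actual helper):

	// waitsnapshot.go - poll a VolumeSnapshot until .status.readyToUse is "true".
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", "addons-708039",
				"get", "volumesnapshot", "new-snapshot-demo", "-n", "default",
				"-o", "jsonpath={.status.readyToUse}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "true" {
				fmt.Println("snapshot ready")
				return
			}
			time.Sleep(2 * time.Second) // snapshot not ready yet; retry
		}
		fmt.Println("timed out waiting for snapshot")
	}
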

TestAddons/parallel/Headlamp (12.67s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-708039 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-708039 --alsologtostderr -v=1: (1.630455233s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-66f6498c69-bl2tk" [2ad3efe7-7944-4a65-9d7d-4b4d15d33f95] Pending
helpers_test.go:344: "headlamp-66f6498c69-bl2tk" [2ad3efe7-7944-4a65-9d7d-4b4d15d33f95] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-bl2tk" [2ad3efe7-7944-4a65-9d7d-4b4d15d33f95] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.039548313s
--- PASS: TestAddons/parallel/Headlamp (12.67s)

TestAddons/parallel/CloudSpanner (5.73s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-88647b4cb-qksz7" [58da67d8-69fe-4531-96ca-aa945e1e1511] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.013817339s
addons_test.go:836: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-708039
--- PASS: TestAddons/parallel/CloudSpanner (5.73s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-708039 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-708039 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/StoppedEnableDisable (12.27s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-708039
addons_test.go:148: (dbg) Done: out/minikube-linux-arm64 stop -p addons-708039: (11.989580417s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-708039
addons_test.go:156: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-708039
addons_test.go:161: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-708039
--- PASS: TestAddons/StoppedEnableDisable (12.27s)

TestCertOptions (37.54s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-640895 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E0731 12:30:44.829312  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-640895 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.798202102s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-640895 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-640895 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-640895 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-640895" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-640895
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-640895: (2.004419927s)
--- PASS: TestCertOptions (37.54s)
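
The SAN check above pipes the apiserver certificate through openssl inside the node. The same verification can be done with Go's standard library; a sketch, assuming the certificate has first been copied out of /var/lib/minikube/certs/apiserver.crt to a local file (the expected names and IPs come from the start flags above):

	// showsans.go - print the SANs of a PEM-encoded certificate.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // local copy; path inside the node differs
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("DNS names:", cert.DNSNames)     // should include localhost, www.google.com
		fmt.Println("IPs:      ", cert.IPAddresses) // should include 127.0.0.1, 192.168.15.15
	}
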

TestCertExpiration (256.08s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-023573 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E0731 12:30:24.295898  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-023573 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.382293731s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-023573 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-023573 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (30.319355449s)
helpers_test.go:175: Cleaning up "cert-expiration-023573" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-023573
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-023573: (3.367711678s)
--- PASS: TestCertExpiration (256.08s)

TestForceSystemdFlag (39.91s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-198804 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-198804 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.174673946s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-198804 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-198804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-198804
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-198804: (3.245264245s)
--- PASS: TestForceSystemdFlag (39.91s)

TestForceSystemdEnv (45.28s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-975317 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-975317 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.606760584s)
helpers_test.go:175: Cleaning up "force-systemd-env-975317" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-975317
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-975317: (2.673659513s)
--- PASS: TestForceSystemdEnv (45.28s)

TestErrorSpam/setup (30.65s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-581572 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-581572 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-581572 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-581572 --driver=docker  --container-runtime=crio: (30.645733027s)
--- PASS: TestErrorSpam/setup (30.65s)

TestErrorSpam/start (0.84s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-581572 --log_dir /tmp/nospam-581572 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-581572 --log_dir /tmp/nospam-581572 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-581572 --log_dir /tmp/nospam-581572 start --dry-run
--- PASS: TestErrorSpam/start (0.84s)

TestErrorSpam/status (1.12s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-581572 --log_dir /tmp/nospam-581572 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-581572 --log_dir /tmp/nospam-581572 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-581572 --log_dir /tmp/nospam-581572 status
--- PASS: TestErrorSpam/status (1.12s)

TestErrorSpam/pause (1.83s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-581572 --log_dir /tmp/nospam-581572 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-581572 --log_dir /tmp/nospam-581572 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-581572 --log_dir /tmp/nospam-581572 pause
--- PASS: TestErrorSpam/pause (1.83s)

TestErrorSpam/unpause (1.99s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-581572 --log_dir /tmp/nospam-581572 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-581572 --log_dir /tmp/nospam-581572 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-581572 --log_dir /tmp/nospam-581572 unpause
--- PASS: TestErrorSpam/unpause (1.99s)

TestErrorSpam/stop (1.48s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-581572 --log_dir /tmp/nospam-581572 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-581572 --log_dir /tmp/nospam-581572 stop: (1.270610591s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-581572 --log_dir /tmp/nospam-581572 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-581572 --log_dir /tmp/nospam-581572 stop
--- PASS: TestErrorSpam/stop (1.48s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/16968-847174/.minikube/files/etc/test/nested/copy/852550/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (79.24s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-063414 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0731 11:55:24.295903  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
E0731 11:55:24.302615  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
E0731 11:55:24.312994  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
E0731 11:55:24.333331  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
E0731 11:55:24.373628  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
E0731 11:55:24.453955  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
E0731 11:55:24.614331  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
E0731 11:55:24.934931  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
E0731 11:55:25.575851  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
E0731 11:55:26.856395  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
E0731 11:55:29.421774  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
E0731 11:55:34.542338  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
E0731 11:55:44.782876  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
E0731 11:56:05.263323  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-063414 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m19.236210134s)
--- PASS: TestFunctional/serial/StartWithProxy (79.24s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (42.92s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-063414 --alsologtostderr -v=8
E0731 11:56:46.223576  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-063414 --alsologtostderr -v=8: (42.911949208s)
functional_test.go:659: soft start took 42.915911383s for "functional-063414" cluster.
--- PASS: TestFunctional/serial/SoftStart (42.92s)

TestFunctional/serial/KubeContext (0.08s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.08s)

TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-063414 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-063414 cache add registry.k8s.io/pause:3.1: (1.304516529s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-063414 cache add registry.k8s.io/pause:3.3: (1.409056413s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-063414 cache add registry.k8s.io/pause:latest: (1.317273719s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.03s)

TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-063414 /tmp/TestFunctionalserialCacheCmdcacheadd_local2262586421/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 cache add minikube-local-cache-test:functional-063414
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 cache delete minikube-local-cache-test:functional-063414
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-063414
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-063414 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (318.894639ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-063414 cache reload: (1.092363226s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)
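
The sequence above is the whole point of cache_reload: delete the image inside the node, confirm crictl inspecti now fails, run cache reload, and confirm the image is back. A rough Go sketch of that flow, assuming the binary path and profile name shown in this log (the sshRun helper is illustrative, not the suite's real code):

	// cachereload.go - remove a cached image in the node, reload, and re-verify.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// sshRun runs a crictl command inside the node over minikube ssh.
	func sshRun(args ...string) error {
		cmd := exec.Command("out/minikube-linux-arm64",
			append([]string{"-p", "functional-063414", "ssh", "sudo", "crictl"}, args...)...)
		return cmd.Run()
	}

	func main() {
		_ = sshRun("rmi", "registry.k8s.io/pause:latest") // drop the image in the node
		if err := sshRun("inspecti", "registry.k8s.io/pause:latest"); err == nil {
			fmt.Println("image unexpectedly still present")
			return
		}
		// repopulate the node from minikube's local cache
		if err := exec.Command("out/minikube-linux-arm64",
			"-p", "functional-063414", "cache", "reload").Run(); err != nil {
			panic(err)
		}
		if err := sshRun("inspecti", "registry.k8s.io/pause:latest"); err != nil {
			panic("image still missing after cache reload")
		}
		fmt.Println("cache reload restored the image")
	}
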

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.39s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 kubectl -- --context functional-063414 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.39s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-063414 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (33.53s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-063414 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-063414 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.532445148s)
functional_test.go:757: restart took 33.532551912s for "functional-063414" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.53s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-063414 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
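
The phase/status pairs above come from inspecting the control-plane pods' JSON. A small Go sketch of the same check, assuming kubectl and the functional-063414 context; the struct mirrors the standard Pod JSON fields, everything else here is illustrative:

	// componenthealth.go - report phase and readiness of control-plane pods.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Labels map[string]string `json:"labels"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-063414",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := "False"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" { // the "status: Ready" lines above check this condition
					ready = c.Status
				}
			}
			fmt.Printf("%s phase: %s, ready: %s\n",
				p.Metadata.Labels["component"], p.Status.Phase, ready)
		}
	}
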

TestFunctional/serial/LogsCmd (1.88s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-063414 logs: (1.876465763s)
--- PASS: TestFunctional/serial/LogsCmd (1.88s)

TestFunctional/serial/LogsFileCmd (1.85s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 logs --file /tmp/TestFunctionalserialLogsFileCmd1613578263/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-063414 logs --file /tmp/TestFunctionalserialLogsFileCmd1613578263/001/logs.txt: (1.847063867s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.85s)

TestFunctional/serial/InvalidService (5.01s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-063414 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-063414
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-063414: exit status 115 (594.147191ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31252 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-063414 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-063414 delete -f testdata/invalidsvc.yaml: (1.087085027s)
--- PASS: TestFunctional/serial/InvalidService (5.01s)

TestFunctional/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-063414 config get cpus: exit status 14 (53.974926ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-063414 config get cpus: exit status 14 (82.591876ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
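
Note the two exit status 14 results: config get on an unset key fails rather than printing an empty value. A short Go sketch that treats that exit code as "unset" when shelling out (the exit-code meaning is taken from this log; the rest is illustrative):

	// configget.go - read a minikube config key, treating exit status 14 as "unset".
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64",
			"-p", "functional-063414", "config", "get", "cpus").Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
			fmt.Println("cpus is not set") // matches the exit status 14 cases above
			return
		}
		if err != nil {
			panic(err)
		}
		fmt.Println("cpus =", strings.TrimSpace(string(out)))
	}
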

TestFunctional/parallel/DashboardCmd (11.14s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-063414 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-063414 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 879130: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.14s)

TestFunctional/parallel/DryRun (0.67s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-063414 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-063414 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (243.777026ms)
-- stdout --
	* [functional-063414] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0731 11:58:33.458240  878245 out.go:296] Setting OutFile to fd 1 ...
	I0731 11:58:33.458450  878245 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:58:33.458475  878245 out.go:309] Setting ErrFile to fd 2...
	I0731 11:58:33.458494  878245 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:58:33.458868  878245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-847174/.minikube/bin
	I0731 11:58:33.459320  878245 out.go:303] Setting JSON to false
	I0731 11:58:33.460468  878245 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":70861,"bootTime":1690733853,"procs":250,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0731 11:58:33.460571  878245 start.go:138] virtualization:  
	I0731 11:58:33.466396  878245 out.go:177] * [functional-063414] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0731 11:58:33.468565  878245 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 11:58:33.471169  878245 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:58:33.468826  878245 notify.go:220] Checking for updates...
	I0731 11:58:33.478598  878245 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig
	I0731 11:58:33.480547  878245 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube
	I0731 11:58:33.483156  878245 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 11:58:33.485881  878245 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 11:58:33.489546  878245 config.go:182] Loaded profile config "functional-063414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 11:58:33.490358  878245 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 11:58:33.519758  878245 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 11:58:33.519872  878245 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 11:58:33.608009  878245 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-07-31 11:58:33.598015854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 11:58:33.608144  878245 docker.go:294] overlay module found
	I0731 11:58:33.611345  878245 out.go:177] * Using the docker driver based on existing profile
	I0731 11:58:33.612869  878245 start.go:298] selected driver: docker
	I0731 11:58:33.612895  878245 start.go:898] validating driver "docker" against &{Name:functional-063414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-063414 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 11:58:33.613014  878245 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 11:58:33.615264  878245 out.go:177] 
	W0731 11:58:33.616784  878245 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0731 11:58:33.618211  878245 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-063414 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.67s)
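
The first dry run exits 23 because 250MiB is below the 1800MB usable minimum quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message. A toy Go version of that guard, using only the numbers from the message above (illustrative, not minikube's actual validation code):

	// memcheck.go - reject memory requests below the usable minimum, mirroring
	// the RSRC_INSUFFICIENT_REQ_MEMORY validation seen in the dry-run output.
	package main

	import "fmt"

	const minUsableMB = 1800 // floor quoted in the error message above

	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateMemory(250))  // fails, like --memory 250MB
		fmt.Println(validateMemory(4000)) // ok, like the profile's 4000MB
	}
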

TestFunctional/parallel/InternationalLanguage (0.34s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-063414 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-063414 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (343.590644ms)
-- stdout --
	* [functional-063414] minikube v1.31.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0731 11:58:35.546276  878718 out.go:296] Setting OutFile to fd 1 ...
	I0731 11:58:35.547050  878718 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:58:35.547068  878718 out.go:309] Setting ErrFile to fd 2...
	I0731 11:58:35.547075  878718 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:58:35.548527  878718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-847174/.minikube/bin
	I0731 11:58:35.549042  878718 out.go:303] Setting JSON to false
	I0731 11:58:35.550183  878718 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":70863,"bootTime":1690733853,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0731 11:58:35.550243  878718 start.go:138] virtualization:  
	I0731 11:58:35.552408  878718 out.go:177] * [functional-063414] minikube v1.31.1 sur Ubuntu 20.04 (arm64)
	I0731 11:58:35.554041  878718 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 11:58:35.555801  878718 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:58:35.554138  878718 notify.go:220] Checking for updates...
	I0731 11:58:35.559602  878718 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig
	I0731 11:58:35.561358  878718 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube
	I0731 11:58:35.562994  878718 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 11:58:35.564778  878718 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 11:58:35.567182  878718 config.go:182] Loaded profile config "functional-063414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 11:58:35.567916  878718 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 11:58:35.611487  878718 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 11:58:35.611596  878718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 11:58:35.741029  878718 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-07-31 11:58:35.730702514 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 11:58:35.741130  878718 docker.go:294] overlay module found
	I0731 11:58:35.743200  878718 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0731 11:58:35.744904  878718 start.go:298] selected driver: docker
	I0731 11:58:35.744924  878718 start.go:898] validating driver "docker" against &{Name:functional-063414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-063414 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 11:58:35.745042  878718 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 11:58:35.747320  878718 out.go:177] 
	W0731 11:58:35.748812  878718 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0731 11:58:35.750601  878718 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.34s)

TestFunctional/parallel/StatusCmd (1.49s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.49s)
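
The -f argument above is a Go template over the status fields (the logged format string spells kublet, verbatim from the command that was run). A minimal sketch of rendering such a template, assuming a simple Status struct with the fields the template references:

	// statusfmt.go - render a status line the way `status -f` does, via text/template.
	// The Status struct here is an assumption for illustration.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		const format = "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
		tmpl := template.Must(template.New("status").Parse(format))
		st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
	}
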

TestFunctional/parallel/ServiceCmdConnect (11.88s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-063414 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-063414 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58d66798bb-5z68j" [bed7feee-962a-402e-9b57-06512b27a0b6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-58d66798bb-5z68j" [bed7feee-962a-402e-9b57-06512b27a0b6] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.025474613s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30612
functional_test.go:1674: http://192.168.49.2:30612: success! body:

Hostname: hello-node-connect-58d66798bb-5z68j

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30612
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.88s)
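Condensed, the check above resolves the NodePort URL and fetches it; a sketch of the same flow (the curl/grep step is illustrative, not the test's code):

	URL=$(out/minikube-linux-arm64 -p functional-063414 service hello-node-connect --url)
	curl -s "$URL" | grep '^Hostname:'   # the echoserver reply names the serving pod, as in the body above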

TestFunctional/parallel/AddonsCmd (0.24s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

TestFunctional/parallel/PersistentVolumeClaim (27.43s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2f2c34c1-1718-4d83-a306-d396891abaab] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.014445365s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-063414 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-063414 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-063414 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-063414 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2ccd8fc9-a125-4dd7-ab8d-4560336fbdc9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2ccd8fc9-a125-4dd7-ab8d-4560336fbdc9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.015621923s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-063414 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-063414 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-063414 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4780a91a-467b-4612-93e1-9237ef4a080a] Pending
helpers_test.go:344: "sp-pod" [4780a91a-467b-4612-93e1-9237ef4a080a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4780a91a-467b-4612-93e1-9237ef4a080a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.012649708s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-063414 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.43s)
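Stripped of the readiness polling, the persistence check above is four steps: write a file through the claim, delete and recreate the pod, then confirm the file survived. The same sequence by hand:

	kubectl --context functional-063414 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-063414 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-063414 apply -f testdata/storage-provisioner/pod.yaml
	# once the new sp-pod is Running, the file written by the old pod should still be listed:
	kubectl --context functional-063414 exec sp-pod -- ls /tmp/mount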

TestFunctional/parallel/SSHCmd (0.82s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.82s)

TestFunctional/parallel/CpCmd (1.58s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh -n functional-063414 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 cp functional-063414:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3628559482/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh -n functional-063414 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.58s)

TestFunctional/parallel/FileSync (0.39s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/852550/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh "sudo cat /etc/test/nested/copy/852550/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)

TestFunctional/parallel/CertSync (2.16s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/852550.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh "sudo cat /etc/ssl/certs/852550.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/852550.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh "sudo cat /usr/share/ca-certificates/852550.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/8525502.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh "sudo cat /etc/ssl/certs/8525502.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/8525502.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh "sudo cat /usr/share/ca-certificates/8525502.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.16s)
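The hash-named entries (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash filenames for the synced certs. A hedged way to verify the pairing, assuming openssl is available in the node image:

	out/minikube-linux-arm64 -p functional-063414 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/852550.pem"
	# the printed hash should match one of the .0 filenames checked above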

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-063414 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (1.08s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-063414 ssh "sudo systemctl is-active docker": exit status 1 (545.370736ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-063414 ssh "sudo systemctl is-active containerd": exit status 1 (535.075129ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (1.08s)
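systemctl is-active exits with status 3 for an inactive unit, which is exactly the non-zero exit the test expects. The assertion in shell form; the crio line is an assumption (the test above only probes the disabled runtimes):

	out/minikube-linux-arm64 -p functional-063414 ssh "sudo systemctl is-active docker"       # inactive, exit 3
	out/minikube-linux-arm64 -p functional-063414 ssh "sudo systemctl is-active containerd"   # inactive, exit 3
	out/minikube-linux-arm64 -p functional-063414 ssh "sudo systemctl is-active crio"         # active on this cri-o profile (assumed)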

TestFunctional/parallel/License (0.3s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.30s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-063414 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-063414 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-063414 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 874386: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-063414 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-063414 version -o=json --components: (1.001257326s)
--- PASS: TestFunctional/parallel/Version/components (1.00s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-063414 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.67s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-063414 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [4570585a-ae40-4b3b-b2b1-e52c74c1c27e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [4570585a-ae40-4b3b-b2b1-e52c74c1c27e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.04306334s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.67s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-063414 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.3
registry.k8s.io/kube-proxy:v1.27.3
registry.k8s.io/kube-controller-manager:v1.27.3
registry.k8s.io/kube-apiserver:v1.27.3
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-063414
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-063414 image ls --format short --alsologtostderr:
I0731 11:58:37.678695  879100 out.go:296] Setting OutFile to fd 1 ...
I0731 11:58:37.678895  879100 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 11:58:37.678900  879100 out.go:309] Setting ErrFile to fd 2...
I0731 11:58:37.678909  879100 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 11:58:37.679179  879100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-847174/.minikube/bin
I0731 11:58:37.679810  879100 config.go:182] Loaded profile config "functional-063414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0731 11:58:37.679964  879100 config.go:182] Loaded profile config "functional-063414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0731 11:58:37.680560  879100 cli_runner.go:164] Run: docker container inspect functional-063414 --format={{.State.Status}}
I0731 11:58:37.703649  879100 ssh_runner.go:195] Run: systemctl --version
I0731 11:58:37.703701  879100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-063414
I0731 11:58:37.739830  879100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35851 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/functional-063414/id_rsa Username:docker}
I0731 11:58:37.854691  879100 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)
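As the stderr trace shows, image ls is answered by crictl inside the node; the same listing can be queried directly:

	out/minikube-linux-arm64 -p functional-063414 ssh "sudo crictl images --output json"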

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-063414 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | alpine             | 66bf2c914bf4d | 42.8MB |
| docker.io/library/nginx                 | latest             | ff78c7a65ec2b | 196MB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/kube-scheduler          | v1.27.3            | bcb9e554eaab6 | 57.6MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b18bf71b941ba | 60.9MB |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| registry.k8s.io/kube-apiserver          | v1.27.3            | 39dfb036b0986 | 116MB  |
| registry.k8s.io/kube-proxy              | v1.27.3            | fb73e92641fd5 | 68.1MB |
| gcr.io/google-containers/addon-resizer  | functional-063414  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/etcd                    | 3.5.7-0            | 24bc64e911039 | 182MB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| localhost/my-image                      | functional-063414  | 3131bbf44dbf7 | 1.64MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-controller-manager | v1.27.3            | ab3683b584ae5 | 109MB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-063414 image ls --format table --alsologtostderr:
I0731 11:58:43.822749  879512 out.go:296] Setting OutFile to fd 1 ...
I0731 11:58:43.823170  879512 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 11:58:43.823218  879512 out.go:309] Setting ErrFile to fd 2...
I0731 11:58:43.823239  879512 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 11:58:43.823539  879512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-847174/.minikube/bin
I0731 11:58:43.824423  879512 config.go:182] Loaded profile config "functional-063414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0731 11:58:43.824775  879512 config.go:182] Loaded profile config "functional-063414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0731 11:58:43.825640  879512 cli_runner.go:164] Run: docker container inspect functional-063414 --format={{.State.Status}}
I0731 11:58:43.847736  879512 ssh_runner.go:195] Run: systemctl --version
I0731 11:58:43.847863  879512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-063414
I0731 11:58:43.875745  879512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35851 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/functional-063414/id_rsa Username:docker}
I0731 11:58:43.981792  879512 ssh_runner.go:195] Run: sudo crictl images --output json
2023/07/31 11:58:46 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-063414 image ls --format json --alsologtostderr:
[{"id":"ed4cb0f5d9bd9d076e131b63baa826987b088da7340ce707207eb4fbc8b9a7a9","repoDigests":["docker.io/library/e6e785f75732e28c458c681ca2c060ba42a092300b5ae7cd14b73f9431c31e54-tmp@sha256:f75f57d7cc1cb295c205a4398143a6705da49b0cf4c7b3f1d36800dc74a48d13"],"repoTags":[],"size":"1637643"},{"id":"ff78c7a65ec2b1fb09f58b27b0dd022ac1f4e16b9bcfe1cbdc18c36f2e0e1842","repoDigests":["docker.io/library/nginx@sha256:67f9a4f10d147a6e04629340e6493c9703300ca23a2f7f3aa56fe615d75d31ca","docker.io/library/nginx@sha256:6faff3cb6b8c141d4828ac6c884a38a680ec6ad122c19397e4774f0bb9616f0c"],"repoTags":["docker.io/library/nginx:latest"],"size":"196443408"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ab3683b5
84ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:06e413293f95c209052e171448fe17685f625c5edfbc7b63df5d87d07b4711c0","registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.3"],"size":"108667702"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad
8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"3131bbf44dbf789151b1d9f4f16eaeaa60920ff439960ad91d2cf623bafc904e","repoDigests":["localhost/my-image@sha256:3e644524f4156202bc460a0a8a723ee52df2709a9e797542680329c99a381ffc"],"repoTags":["localhost/my-image:functional-063414"],"size":"1640226"},{"id":"24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737","repoDigests":["registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd","registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"182283991"},{"id":"fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a","repoDigests":["registry.k8s.io/kube-proxy@sha256:948423f9b566c1f1bfab123911520168c041193addb9157d7121eaf2bb5afc53","registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.
3"],"size":"68099991"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-063414"],"size":"34114467"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io
/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","repoDigests":["docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f","docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"60881430"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5
e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473","repoDigests":["registry.k8s.io/kube-apiserver@sha256:699defe487a15c642f6f7718de0684e49f4353e6c63f93308d314aab4dedd090","registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.3"],"size":"116204496"},{"id":"bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540","repoDigests":["registry.k8s.io/kube-scheduler@sha256:4cc5890f8b0fc5fb3f8e07535254f8ad97d90a0335bedcc8773db4ad1e7481bf","registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.3"],"size":"57615158"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe
0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"66bf2c914bf4d0aac4b62f09f9f74ad35898d613024a0f2ec94dca9e79fac6ea","repoDigests":["docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6","docker.io/library/nginx@sha256:40199b09f65752fed2a540913a037a7a2c3120bd9d4cf20e7d85caafa66381d8"],"repoTags":["docker.io/library/nginx:alpine"],"size":"42812731"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-063414 image ls --format json --alsologtostderr:
I0731 11:58:43.502445  879483 out.go:296] Setting OutFile to fd 1 ...
I0731 11:58:43.502640  879483 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 11:58:43.502652  879483 out.go:309] Setting ErrFile to fd 2...
I0731 11:58:43.502658  879483 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 11:58:43.502979  879483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-847174/.minikube/bin
I0731 11:58:43.503707  879483 config.go:182] Loaded profile config "functional-063414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0731 11:58:43.503878  879483 config.go:182] Loaded profile config "functional-063414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0731 11:58:43.504470  879483 cli_runner.go:164] Run: docker container inspect functional-063414 --format={{.State.Status}}
I0731 11:58:43.531859  879483 ssh_runner.go:195] Run: systemctl --version
I0731 11:58:43.531928  879483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-063414
I0731 11:58:43.565964  879483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35851 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/functional-063414/id_rsa Username:docker}
I0731 11:58:43.680272  879483 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-063414 image ls --format yaml --alsologtostderr:
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737
repoDigests:
- registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "182283991"
- id: 39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:699defe487a15c642f6f7718de0684e49f4353e6c63f93308d314aab4dedd090
- registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.3
size: "116204496"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:948423f9b566c1f1bfab123911520168c041193addb9157d7121eaf2bb5afc53
- registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699
repoTags:
- registry.k8s.io/kube-proxy:v1.27.3
size: "68099991"
- id: bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:4cc5890f8b0fc5fb3f8e07535254f8ad97d90a0335bedcc8773db4ad1e7481bf
- registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.3
size: "57615158"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: ff78c7a65ec2b1fb09f58b27b0dd022ac1f4e16b9bcfe1cbdc18c36f2e0e1842
repoDigests:
- docker.io/library/nginx@sha256:67f9a4f10d147a6e04629340e6493c9703300ca23a2f7f3aa56fe615d75d31ca
- docker.io/library/nginx@sha256:6faff3cb6b8c141d4828ac6c884a38a680ec6ad122c19397e4774f0bb9616f0c
repoTags:
- docker.io/library/nginx:latest
size: "196443408"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 66bf2c914bf4d0aac4b62f09f9f74ad35898d613024a0f2ec94dca9e79fac6ea
repoDigests:
- docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6
- docker.io/library/nginx@sha256:40199b09f65752fed2a540913a037a7a2c3120bd9d4cf20e7d85caafa66381d8
repoTags:
- docker.io/library/nginx:alpine
size: "42812731"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-063414
size: "34114467"
- id: ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:06e413293f95c209052e171448fe17685f625c5edfbc7b63df5d87d07b4711c0
- registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.3
size: "108667702"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79
repoDigests:
- docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "60881430"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-063414 image ls --format yaml --alsologtostderr:
I0731 11:58:38.015600  879135 out.go:296] Setting OutFile to fd 1 ...
I0731 11:58:38.015822  879135 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 11:58:38.015859  879135 out.go:309] Setting ErrFile to fd 2...
I0731 11:58:38.015879  879135 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 11:58:38.016222  879135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-847174/.minikube/bin
I0731 11:58:38.017052  879135 config.go:182] Loaded profile config "functional-063414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0731 11:58:38.017230  879135 config.go:182] Loaded profile config "functional-063414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0731 11:58:38.017846  879135 cli_runner.go:164] Run: docker container inspect functional-063414 --format={{.State.Status}}
I0731 11:58:38.048049  879135 ssh_runner.go:195] Run: systemctl --version
I0731 11:58:38.048102  879135 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-063414
I0731 11:58:38.079222  879135 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35851 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/functional-063414/id_rsa Username:docker}
I0731 11:58:38.194316  879135 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-063414 ssh pgrep buildkitd: exit status 1 (317.883819ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 image build -t localhost/my-image:functional-063414 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-063414 image build -t localhost/my-image:functional-063414 testdata/build --alsologtostderr: (4.571266652s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-063414 image build -t localhost/my-image:functional-063414 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ed4cb0f5d9b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-063414
--> 3131bbf44db
Successfully tagged localhost/my-image:functional-063414
3131bbf44dbf789151b1d9f4f16eaeaa60920ff439960ad91d2cf623bafc904e
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-063414 image build -t localhost/my-image:functional-063414 testdata/build --alsologtostderr:
I0731 11:58:38.639810  879247 out.go:296] Setting OutFile to fd 1 ...
I0731 11:58:38.641191  879247 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 11:58:38.641243  879247 out.go:309] Setting ErrFile to fd 2...
I0731 11:58:38.641266  879247 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 11:58:38.641588  879247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-847174/.minikube/bin
I0731 11:58:38.642283  879247 config.go:182] Loaded profile config "functional-063414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0731 11:58:38.643374  879247 config.go:182] Loaded profile config "functional-063414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0731 11:58:38.643882  879247 cli_runner.go:164] Run: docker container inspect functional-063414 --format={{.State.Status}}
I0731 11:58:38.664304  879247 ssh_runner.go:195] Run: systemctl --version
I0731 11:58:38.664361  879247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-063414
I0731 11:58:38.687209  879247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35851 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/functional-063414/id_rsa Username:docker}
I0731 11:58:38.777871  879247 build_images.go:151] Building image from path: /tmp/build.198913953.tar
I0731 11:58:38.777948  879247 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0731 11:58:38.788736  879247 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.198913953.tar
I0731 11:58:38.793406  879247 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.198913953.tar: stat -c "%s %y" /var/lib/minikube/build/build.198913953.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.198913953.tar': No such file or directory
I0731 11:58:38.793434  879247 ssh_runner.go:362] scp /tmp/build.198913953.tar --> /var/lib/minikube/build/build.198913953.tar (3072 bytes)
I0731 11:58:38.825508  879247 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.198913953
I0731 11:58:38.837856  879247 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.198913953 -xf /var/lib/minikube/build/build.198913953.tar
I0731 11:58:38.849985  879247 crio.go:297] Building image: /var/lib/minikube/build/build.198913953
I0731 11:58:38.850051  879247 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-063414 /var/lib/minikube/build/build.198913953 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0731 11:58:43.122039  879247 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-063414 /var/lib/minikube/build/build.198913953 --cgroup-manager=cgroupfs: (4.271962841s)
I0731 11:58:43.122114  879247 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.198913953
I0731 11:58:43.134745  879247 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.198913953.tar
I0731 11:58:43.148895  879247 build_images.go:207] Built localhost/my-image:functional-063414 from /tmp/build.198913953.tar
I0731 11:58:43.148973  879247 build_images.go:123] succeeded building to: functional-063414
I0731 11:58:43.149002  879247 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.17s)
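The STEP lines imply a three-line Containerfile in testdata/build. A sketch that reproduces the build; the content.txt payload is an assumption, only the file's presence is visible in the log:

	mkdir -p testdata/build && printf 'test content\n' > testdata/build/content.txt   # payload assumed
	cat > testdata/build/Dockerfile <<-'EOF'
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
	EOF
	out/minikube-linux-arm64 -p functional-063414 image build -t localhost/my-image:functional-063414 testdata/build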

TestFunctional/parallel/ImageCommands/Setup (2s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.975073688s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-063414
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.00s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 image load --daemon gcr.io/google-containers/addon-resizer:functional-063414 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-063414 image load --daemon gcr.io/google-containers/addon-resizer:functional-063414 --alsologtostderr: (3.820229339s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.06s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 image load --daemon gcr.io/google-containers/addon-resizer:functional-063414 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-063414 image load --daemon gcr.io/google-containers/addon-resizer:functional-063414 --alsologtostderr: (2.619068356s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.85s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.739003146s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-063414
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 image load --daemon gcr.io/google-containers/addon-resizer:functional-063414 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-063414 image load --daemon gcr.io/google-containers/addon-resizer:functional-063414 --alsologtostderr: (4.297444763s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.40s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-063414 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.106.147 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
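Pieced together, the tunnel flow above is: keep minikube tunnel running, wait for the service to receive a LoadBalancer ingress IP, then hit that IP directly. As a sketch (the curl step is illustrative; the IP is the one reported above):

	out/minikube-linux-arm64 -p functional-063414 tunnel --alsologtostderr &
	kubectl --context functional-063414 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl -sI http://10.110.106.147/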

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-063414 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)
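
All three UpdateContextCmd variants run the same command against different kubeconfig states. A minimal sketch:

    # Rewrite this profile's kubeconfig entry to match the cluster's current endpoint.
    out/minikube-linux-arm64 -p functional-063414 update-context --alsologtostderr -v=2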

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 image save gcr.io/google-containers/addon-resizer:functional-063414 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-arm64 -p functional-063414 image save gcr.io/google-containers/addon-resizer:functional-063414 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.084021918s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.08s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 image rm gcr.io/google-containers/addon-resizer:functional-063414 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

TestFunctional/parallel/MountCmd/any-port (8.88s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-063414 /tmp/TestFunctionalparallelMountCmdany-port3288916947/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1690804679110239908" to /tmp/TestFunctionalparallelMountCmdany-port3288916947/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1690804679110239908" to /tmp/TestFunctionalparallelMountCmdany-port3288916947/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1690804679110239908" to /tmp/TestFunctionalparallelMountCmdany-port3288916947/001/test-1690804679110239908
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-063414 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (545.672939ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 31 11:57 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 31 11:57 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 31 11:57 test-1690804679110239908
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh cat /mount-9p/test-1690804679110239908
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-063414 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [96bf1dc6-8edd-4ce8-8fc4-491ba250bce7] Pending
helpers_test.go:344: "busybox-mount" [96bf1dc6-8edd-4ce8-8fc4-491ba250bce7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [96bf1dc6-8edd-4ce8-8fc4-491ba250bce7] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [96bf1dc6-8edd-4ce8-8fc4-491ba250bce7] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.01489472s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-063414 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-063414 /tmp/TestFunctionalparallelMountCmdany-port3288916947/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.88s)
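
The mount test drives a host-to-guest 9p mount end to end. A minimal sketch, with /tmp/scratch as a hypothetical host directory:

    # Serve /tmp/scratch from the host and mount it at /mount-9p in the guest over 9p.
    out/minikube-linux-arm64 mount -p functional-063414 /tmp/scratch:/mount-9p --alsologtostderr -v=1 &
    # Verify from inside the guest; findmnt fails (as above) until the mount is ready.
    out/minikube-linux-arm64 -p functional-063414 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-063414 ssh -- ls -la /mount-9p
    # Stop the background mount daemon when done.
    kill $!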

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-063414 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.566940488s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.89s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-063414
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 image save --daemon gcr.io/google-containers/addon-resizer:functional-063414 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-arm64 -p functional-063414 image save --daemon gcr.io/google-containers/addon-resizer:functional-063414 --alsologtostderr: (2.836002392s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-063414
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.89s)
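
ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon together form a round trip. A minimal sketch of the same flow, with /tmp/addon-resizer-save.tar as a hypothetical path:

    IMG=gcr.io/google-containers/addon-resizer:functional-063414
    # Export the image from the cluster runtime to a tarball, then delete it in-cluster.
    out/minikube-linux-arm64 -p functional-063414 image save "$IMG" /tmp/addon-resizer-save.tar
    out/minikube-linux-arm64 -p functional-063414 image rm "$IMG"
    # Re-import it from the tarball, or push it back into the host Docker daemon.
    out/minikube-linux-arm64 -p functional-063414 image load /tmp/addon-resizer-save.tar
    out/minikube-linux-arm64 -p functional-063414 image save --daemon "$IMG"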

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.28s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-063414 /tmp/TestFunctionalparallelMountCmdspecific-port2132115212/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh "findmnt -T /mount-9p | grep 9p"
E0731 11:58:08.144713  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-063414 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (364.244038ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-063414 /tmp/TestFunctionalparallelMountCmdspecific-port2132115212/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-063414 ssh "sudo umount -f /mount-9p": exit status 1 (404.776441ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-063414 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-063414 /tmp/TestFunctionalparallelMountCmdspecific-port2132115212/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.28s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.71s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-063414 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4267750037/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-063414 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4267750037/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-063414 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4267750037/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-063414 ssh "findmnt -T" /mount1: exit status 1 (871.296913ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-063414 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-063414 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4267750037/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-063414 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4267750037/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-063414 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4267750037/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.71s)
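
VerifyCleanup relies on mount --kill=true to reap every mount daemon for a profile at once, which is why the three stop attempts above find their parent processes already gone. A minimal sketch:

    # Terminate all background `minikube mount` processes belonging to this profile.
    out/minikube-linux-arm64 mount -p functional-063414 --kill=true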

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-063414 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-063414 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7b684b55f9-dlt5k" [5d0df856-3ef9-4001-8574-008f2a4c567d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-7b684b55f9-dlt5k" [5d0df856-3ef9-4001-8574-008f2a4c567d] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.015648086s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.24s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/ServiceCmd/List (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.64s)

TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "453.227148ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "60.30796ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "417.268834ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "59.654014ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.67s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 service list -o json
functional_test.go:1493: Took "667.473192ms" to run "out/minikube-linux-arm64 -p functional-063414 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.67s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:32409
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.57s)

TestFunctional/parallel/ServiceCmd/Format (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.58s)

TestFunctional/parallel/ServiceCmd/URL (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-063414 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:32409
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.60s)
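
The ServiceCmd subtests are different views over the same NodePort service. A minimal sketch of the variants, assuming the hello-node deployment created above:

    out/minikube-linux-arm64 -p functional-063414 service list -o json                                   # machine-readable listing
    out/minikube-linux-arm64 -p functional-063414 service --namespace=default --https --url hello-node   # https://<node-ip>:<node-port>
    out/minikube-linux-arm64 -p functional-063414 service hello-node --url --format={{.IP}}              # node IP only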

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-063414
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-063414
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-063414
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (102.95s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-604717 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0731 12:00:24.295873  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-604717 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m42.946742811s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (102.95s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.54s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-604717 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-604717 addons enable ingress --alsologtostderr -v=5: (11.537379019s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.54s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.65s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-604717 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.65s)
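
The legacy-ingress suite pins an old Kubernetes release and then enables the two addons serially. A minimal sketch of the same sequence:

    out/minikube-linux-arm64 start -p ingress-addon-legacy-604717 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p ingress-addon-legacy-604717 addons enable ingress
    out/minikube-linux-arm64 -p ingress-addon-legacy-604717 addons enable ingress-dns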

                                                
                                    
TestJSONOutput/start/Command (78.54s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-386407 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0731 12:04:04.056219  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-386407 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m18.53233957s)
--- PASS: TestJSONOutput/start/Command (78.54s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.83s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-386407 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.83s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.74s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-386407 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.74s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.91s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-386407 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-386407 --output=json --user=testUser: (5.914489624s)
--- PASS: TestJSONOutput/stop/Command (5.91s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-204916 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-204916 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (84.256396ms)

-- stdout --
	{"specversion":"1.0","id":"69d84223-c1f0-4e7d-b832-294021fc74a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-204916] minikube v1.31.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bf400dd6-6be7-4df9-8ee9-e199eb3a010f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16968"}}
	{"specversion":"1.0","id":"ca482e58-694e-40e2-a030-92fe86cb5124","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"45c780a1-2ba8-4e78-9a40-e10496f1d80d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig"}}
	{"specversion":"1.0","id":"0841d4b1-5aec-4259-ad58-181be70bf395","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube"}}
	{"specversion":"1.0","id":"e99fd0a7-85c1-4656-913d-f897e7139800","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a68c80f6-3616-4b1b-8427-1d7a5b0db201","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a7176a02-7cd7-4e8a-a3b8-eb3c2f3e688d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-204916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-204916
--- PASS: TestErrorJSONOutput (0.23s)
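
Each line emitted under --output=json is a CloudEvents envelope, so errors like the one above can be picked out mechanically. A minimal sketch, assuming jq is available on the host (it is not part of the suite):

    # Surface only error events and their exit codes from the JSON stream.
    out/minikube-linux-arm64 start -p json-output-error-204916 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'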

                                                
                                    
TestKicCustomNetwork/create_custom_network (45.7s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-334419 --network=
E0731 12:05:24.295908  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
E0731 12:05:25.976977  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
E0731 12:05:44.828762  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
E0731 12:05:44.834015  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
E0731 12:05:44.844423  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
E0731 12:05:44.865508  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
E0731 12:05:44.906749  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
E0731 12:05:44.986989  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
E0731 12:05:45.147548  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
E0731 12:05:45.468044  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
E0731 12:05:46.109139  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
E0731 12:05:47.389341  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
E0731 12:05:49.950421  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
E0731 12:05:55.071577  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-334419 --network=: (43.508948926s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-334419" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-334419
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-334419: (2.167643576s)
--- PASS: TestKicCustomNetwork/create_custom_network (45.70s)

TestKicCustomNetwork/use_default_bridge_network (33.22s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-930655 --network=bridge
E0731 12:06:05.312306  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
E0731 12:06:25.792706  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-930655 --network=bridge: (31.183099826s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-930655" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-930655
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-930655: (2.01178549s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.22s)

TestKicExistingNetwork (34.39s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-399935 --network=existing-network
E0731 12:07:06.752964  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-399935 --network=existing-network: (32.187356445s)
helpers_test.go:175: Cleaning up "existing-network-399935" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-399935
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-399935: (2.034071378s)
--- PASS: TestKicExistingNetwork (34.39s)

TestKicCustomSubnet (37.8s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-032491 --subnet=192.168.60.0/24
E0731 12:07:42.132279  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-032491 --subnet=192.168.60.0/24: (35.567962503s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-032491 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-032491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-032491
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-032491: (2.203533722s)
--- PASS: TestKicCustomSubnet (37.80s)

TestKicStaticIP (39.61s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-837449 --static-ip=192.168.200.200
E0731 12:08:09.820270  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-837449 --static-ip=192.168.200.200: (37.276980385s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-837449 ip
helpers_test.go:175: Cleaning up "static-ip-837449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-837449
E0731 12:08:28.674521  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-837449: (2.167996424s)
--- PASS: TestKicStaticIP (39.61s)
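
The four KIC networking tests exercise the related --network, --subnet and --static-ip flags. A minimal sketch, with hypothetical profile and network names:

    out/minikube-linux-arm64 start -p net-demo --network=my-net              # attach to a named (or pre-existing) Docker network
    out/minikube-linux-arm64 start -p subnet-demo --subnet=192.168.60.0/24   # choose the network's subnet
    out/minikube-linux-arm64 start -p ip-demo --static-ip=192.168.200.200    # pin the node IP
    # Inspect what was created on the Docker side (the network is named after the profile).
    docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"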

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (71s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-918406 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-918406 --driver=docker  --container-runtime=crio: (33.357316847s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-921393 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-921393 --driver=docker  --container-runtime=crio: (32.417832351s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-918406
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-921393
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-921393" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-921393
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-921393: (1.988857405s)
helpers_test.go:175: Cleaning up "first-918406" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-918406
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-918406: (1.970770141s)
--- PASS: TestMinikubeProfile (71.00s)
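
TestMinikubeProfile checks that `minikube profile <name>` switches the active profile between two clusters. A minimal sketch, with hypothetical profile names:

    out/minikube-linux-arm64 start -p first --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 start -p second --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 profile first          # make "first" the active profile
    out/minikube-linux-arm64 profile list -ojson    # confirm via the machine-readable listing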

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.88s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-608563 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-608563 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.884565791s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.88s)

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-608563 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (9.63s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-610458 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-610458 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.624967905s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.63s)

TestMountStart/serial/VerifyMountSecond (0.3s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-610458 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

TestMountStart/serial/DeleteFirst (1.66s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-608563 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-608563 --alsologtostderr -v=5: (1.663702761s)
--- PASS: TestMountStart/serial/DeleteFirst (1.66s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-610458 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-610458
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-610458: (1.240400969s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (8.14s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-610458
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-610458: (7.136504147s)
--- PASS: TestMountStart/serial/RestartStopped (8.14s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-610458 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)
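
The MountStart suite starts profiles with a built-in host mount rather than a separate mount daemon. A minimal sketch of the flags involved, with the profile name hypothetical:

    # Start a Kubernetes-free node whose guest sees the host mount at /minikube-host.
    out/minikube-linux-arm64 start -p mount-demo --memory=2048 --mount --mount-port 46464 --no-kubernetes --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p mount-demo ssh -- ls /minikube-host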

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (84.64s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-951087 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0731 12:10:24.296212  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
E0731 12:10:44.829220  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
E0731 12:11:12.515531  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-951087 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m24.086108839s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (84.64s)

TestMultiNode/serial/DeployApp2Nodes (5.31s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951087 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951087 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-951087 -- rollout status deployment/busybox: (3.121406845s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951087 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951087 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951087 -- exec busybox-67b7f59bb-bbjrl -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951087 -- exec busybox-67b7f59bb-sssw6 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951087 -- exec busybox-67b7f59bb-bbjrl -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951087 -- exec busybox-67b7f59bb-sssw6 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951087 -- exec busybox-67b7f59bb-bbjrl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951087 -- exec busybox-67b7f59bb-sssw6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.31s)

TestMultiNode/serial/AddNode (48.05s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-951087 -v 3 --alsologtostderr
E0731 12:11:47.345409  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-951087 -v 3 --alsologtostderr: (47.332407924s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.05s)
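
The AddNode step boils down to one command against the running profile. A minimal sketch:

    # Add another node to the cluster, then confirm every node reports Ready.
    out/minikube-linux-arm64 node add -p multinode-951087 --alsologtostderr
    out/minikube-linux-arm64 -p multinode-951087 status --alsologtostderr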

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

TestMultiNode/serial/CopyFile (10.89s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 cp testdata/cp-test.txt multinode-951087:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 ssh -n multinode-951087 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 cp multinode-951087:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1171499464/001/cp-test_multinode-951087.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 ssh -n multinode-951087 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 cp multinode-951087:/home/docker/cp-test.txt multinode-951087-m02:/home/docker/cp-test_multinode-951087_multinode-951087-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 ssh -n multinode-951087 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 ssh -n multinode-951087-m02 "sudo cat /home/docker/cp-test_multinode-951087_multinode-951087-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 cp multinode-951087:/home/docker/cp-test.txt multinode-951087-m03:/home/docker/cp-test_multinode-951087_multinode-951087-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 ssh -n multinode-951087 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 ssh -n multinode-951087-m03 "sudo cat /home/docker/cp-test_multinode-951087_multinode-951087-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 cp testdata/cp-test.txt multinode-951087-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 ssh -n multinode-951087-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 cp multinode-951087-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1171499464/001/cp-test_multinode-951087-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 ssh -n multinode-951087-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 cp multinode-951087-m02:/home/docker/cp-test.txt multinode-951087:/home/docker/cp-test_multinode-951087-m02_multinode-951087.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 ssh -n multinode-951087-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 ssh -n multinode-951087 "sudo cat /home/docker/cp-test_multinode-951087-m02_multinode-951087.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 cp multinode-951087-m02:/home/docker/cp-test.txt multinode-951087-m03:/home/docker/cp-test_multinode-951087-m02_multinode-951087-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 ssh -n multinode-951087-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 ssh -n multinode-951087-m03 "sudo cat /home/docker/cp-test_multinode-951087-m02_multinode-951087-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 cp testdata/cp-test.txt multinode-951087-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 ssh -n multinode-951087-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 cp multinode-951087-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1171499464/001/cp-test_multinode-951087-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 ssh -n multinode-951087-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 cp multinode-951087-m03:/home/docker/cp-test.txt multinode-951087:/home/docker/cp-test_multinode-951087-m03_multinode-951087.txt
E0731 12:12:42.132190  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 ssh -n multinode-951087-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 ssh -n multinode-951087 "sudo cat /home/docker/cp-test_multinode-951087-m03_multinode-951087.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 cp multinode-951087-m03:/home/docker/cp-test.txt multinode-951087-m02:/home/docker/cp-test_multinode-951087-m03_multinode-951087-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 ssh -n multinode-951087-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 ssh -n multinode-951087-m02 "sudo cat /home/docker/cp-test_multinode-951087-m03_multinode-951087-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.89s)
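
The block above exercises all three directions of minikube's cp subcommand. Condensed, with the profile and file names taken from this run (the plain minikube binary stands in for out/minikube-linux-arm64, and the destination path in the second command is illustrative):

	minikube -p multinode-951087 cp testdata/cp-test.txt multinode-951087:/home/docker/cp-test.txt        # host -> node
	minikube -p multinode-951087 cp multinode-951087:/home/docker/cp-test.txt /tmp/cp-test.txt            # node -> host
	minikube -p multinode-951087 cp multinode-951087:/home/docker/cp-test.txt multinode-951087-m02:/home/docker/cp-test.txt   # node -> node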

TestMultiNode/serial/StopNode (2.41s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-951087 node stop m03: (1.270264035s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-951087 status: exit status 7 (571.029637ms)

-- stdout --
	multinode-951087
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-951087-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-951087-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-951087 status --alsologtostderr: exit status 7 (569.500462ms)

-- stdout --
	multinode-951087
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-951087-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-951087-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:12:45.980467  925886 out.go:296] Setting OutFile to fd 1 ...
	I0731 12:12:45.980655  925886 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 12:12:45.980663  925886 out.go:309] Setting ErrFile to fd 2...
	I0731 12:12:45.980668  925886 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 12:12:45.980956  925886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-847174/.minikube/bin
	I0731 12:12:45.981143  925886 out.go:303] Setting JSON to false
	I0731 12:12:45.981225  925886 mustload.go:65] Loading cluster: multinode-951087
	I0731 12:12:45.981320  925886 notify.go:220] Checking for updates...
	I0731 12:12:45.981611  925886 config.go:182] Loaded profile config "multinode-951087": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 12:12:45.981638  925886 status.go:255] checking status of multinode-951087 ...
	I0731 12:12:45.982610  925886 cli_runner.go:164] Run: docker container inspect multinode-951087 --format={{.State.Status}}
	I0731 12:12:46.005907  925886 status.go:330] multinode-951087 host status = "Running" (err=<nil>)
	I0731 12:12:46.005937  925886 host.go:66] Checking if "multinode-951087" exists ...
	I0731 12:12:46.006257  925886 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-951087
	I0731 12:12:46.027971  925886 host.go:66] Checking if "multinode-951087" exists ...
	I0731 12:12:46.028330  925886 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 12:12:46.028384  925886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951087
	I0731 12:12:46.063862  925886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35916 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/multinode-951087/id_rsa Username:docker}
	I0731 12:12:46.158619  925886 ssh_runner.go:195] Run: systemctl --version
	I0731 12:12:46.164443  925886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 12:12:46.179318  925886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 12:12:46.253084  925886 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-07-31 12:12:46.242893223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 12:12:46.253769  925886 kubeconfig.go:92] found "multinode-951087" server: "https://192.168.58.2:8443"
	I0731 12:12:46.253795  925886 api_server.go:166] Checking apiserver status ...
	I0731 12:12:46.253840  925886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:12:46.267371  925886 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1270/cgroup
	I0731 12:12:46.279415  925886 api_server.go:182] apiserver freezer: "11:freezer:/docker/6a8d3aff5e733e121ed34bafca8a471542ec46bd04d1fe3366b9e0d8f0426fac/crio/crio-1291cea841ceb7e716413a5016086171a49b16c0a653fedd6d18b48a5e012246"
	I0731 12:12:46.279528  925886 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6a8d3aff5e733e121ed34bafca8a471542ec46bd04d1fe3366b9e0d8f0426fac/crio/crio-1291cea841ceb7e716413a5016086171a49b16c0a653fedd6d18b48a5e012246/freezer.state
	I0731 12:12:46.290822  925886 api_server.go:204] freezer state: "THAWED"
	I0731 12:12:46.290873  925886 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0731 12:12:46.299970  925886 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0731 12:12:46.299996  925886 status.go:421] multinode-951087 apiserver status = Running (err=<nil>)
	I0731 12:12:46.300008  925886 status.go:257] multinode-951087 status: &{Name:multinode-951087 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 12:12:46.300027  925886 status.go:255] checking status of multinode-951087-m02 ...
	I0731 12:12:46.300361  925886 cli_runner.go:164] Run: docker container inspect multinode-951087-m02 --format={{.State.Status}}
	I0731 12:12:46.319355  925886 status.go:330] multinode-951087-m02 host status = "Running" (err=<nil>)
	I0731 12:12:46.319383  925886 host.go:66] Checking if "multinode-951087-m02" exists ...
	I0731 12:12:46.319685  925886 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-951087-m02
	I0731 12:12:46.339268  925886 host.go:66] Checking if "multinode-951087-m02" exists ...
	I0731 12:12:46.339586  925886 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 12:12:46.339635  925886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951087-m02
	I0731 12:12:46.368769  925886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35921 SSHKeyPath:/home/jenkins/minikube-integration/16968-847174/.minikube/machines/multinode-951087-m02/id_rsa Username:docker}
	I0731 12:12:46.458492  925886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 12:12:46.472726  925886 status.go:257] multinode-951087-m02 status: &{Name:multinode-951087-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0731 12:12:46.472760  925886 status.go:255] checking status of multinode-951087-m03 ...
	I0731 12:12:46.473071  925886 cli_runner.go:164] Run: docker container inspect multinode-951087-m03 --format={{.State.Status}}
	I0731 12:12:46.491533  925886 status.go:330] multinode-951087-m03 host status = "Stopped" (err=<nil>)
	I0731 12:12:46.491561  925886 status.go:343] host is not running, skipping remaining checks
	I0731 12:12:46.491569  925886 status.go:257] multinode-951087-m03 status: &{Name:multinode-951087-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.41s)
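
The status probe in the stderr block above checks apiserver health in three steps: find the kube-apiserver PID, read its cgroup freezer state, then hit /healthz. A sketch of the same sequence run by hand inside the node, where <pid> and <cgroup-path> are placeholders for the values the first two commands return (1270 and the docker/.../crio/... path in this run):

	sudo pgrep -xnf kube-apiserver.*minikube.*                      # apiserver PID
	sudo egrep ^[0-9]+:freezer: /proc/<pid>/cgroup                  # its freezer cgroup
	sudo cat /sys/fs/cgroup/freezer/<cgroup-path>/freezer.state     # THAWED = not paused
	curl -k https://192.168.58.2:8443/healthz                       # expect 200 "ok" when healthy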

TestMultiNode/serial/StartAfterStop (12.34s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-951087 node start m03 --alsologtostderr: (11.501889125s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.34s)

TestMultiNode/serial/RestartKeepsNodes (123.09s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-951087
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-951087
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-951087: (25.129977247s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-951087 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-951087 --wait=true -v=8 --alsologtostderr: (1m37.813113195s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-951087
--- PASS: TestMultiNode/serial/RestartKeepsNodes (123.09s)

TestMultiNode/serial/DeleteNode (5.24s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-951087 node delete m03: (4.365648811s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.24s)

TestMultiNode/serial/StopMultiNode (24.07s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 stop
E0731 12:15:24.295966  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-951087 stop: (23.878324854s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-951087 status: exit status 7 (106.147585ms)

-- stdout --
	multinode-951087
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-951087-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-951087 status --alsologtostderr: exit status 7 (89.891734ms)

-- stdout --
	multinode-951087
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-951087-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:15:31.201994  934066 out.go:296] Setting OutFile to fd 1 ...
	I0731 12:15:31.202122  934066 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 12:15:31.202130  934066 out.go:309] Setting ErrFile to fd 2...
	I0731 12:15:31.202135  934066 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 12:15:31.202391  934066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-847174/.minikube/bin
	I0731 12:15:31.202556  934066 out.go:303] Setting JSON to false
	I0731 12:15:31.202631  934066 mustload.go:65] Loading cluster: multinode-951087
	I0731 12:15:31.202703  934066 notify.go:220] Checking for updates...
	I0731 12:15:31.203063  934066 config.go:182] Loaded profile config "multinode-951087": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 12:15:31.203073  934066 status.go:255] checking status of multinode-951087 ...
	I0731 12:15:31.204213  934066 cli_runner.go:164] Run: docker container inspect multinode-951087 --format={{.State.Status}}
	I0731 12:15:31.222038  934066 status.go:330] multinode-951087 host status = "Stopped" (err=<nil>)
	I0731 12:15:31.222062  934066 status.go:343] host is not running, skipping remaining checks
	I0731 12:15:31.222069  934066 status.go:257] multinode-951087 status: &{Name:multinode-951087 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 12:15:31.222092  934066 status.go:255] checking status of multinode-951087-m02 ...
	I0731 12:15:31.222379  934066 cli_runner.go:164] Run: docker container inspect multinode-951087-m02 --format={{.State.Status}}
	I0731 12:15:31.241066  934066 status.go:330] multinode-951087-m02 host status = "Stopped" (err=<nil>)
	I0731 12:15:31.241086  934066 status.go:343] host is not running, skipping remaining checks
	I0731 12:15:31.241094  934066 status.go:257] multinode-951087-m02 status: &{Name:multinode-951087-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.07s)
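
Note that the exit status 7 here is the expected result of stopping the profile, not a failure; the "status error: exit status 7 (may be ok)" annotations on other status checks in this report point the same way. A small shell sketch of how a caller might treat it (the exit-code meaning is inferred from this report, not from minikube documentation):

	out/minikube-linux-arm64 -p multinode-951087 status
	rc=$?
	[ "$rc" -eq 7 ] && echo "profile stopped"   # 0 would mean everything is running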

TestMultiNode/serial/RestartMultiNode (55.84s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-951087 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0731 12:15:44.829270  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-951087 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (55.078735747s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951087 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.84s)

TestMultiNode/serial/ValidateNameConflict (35.96s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-951087
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-951087-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-951087-m02 --driver=docker  --container-runtime=crio: exit status 14 (87.458898ms)

-- stdout --
	* [multinode-951087-m02] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-951087-m02' is duplicated with machine name 'multinode-951087-m02' in profile 'multinode-951087'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-951087-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-951087-m03 --driver=docker  --container-runtime=crio: (33.382309737s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-951087
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-951087: exit status 80 (380.518041ms)

-- stdout --
	* Adding node m03 to cluster multinode-951087
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-951087-m03 already exists in multinode-951087-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-951087-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-951087-m03: (2.05145189s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.96s)

TestPreload (142.79s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-189015 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0731 12:17:42.132171  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-189015 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m25.039971233s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-189015 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-189015 image pull gcr.io/k8s-minikube/busybox: (2.109731932s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-189015
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-189015: (5.92367751s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-189015 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0731 12:19:05.180787  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-189015 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (46.988646605s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-189015 image list
helpers_test.go:175: Cleaning up "test-preload-189015" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-189015
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-189015: (2.426783404s)
--- PASS: TestPreload (142.79s)

TestScheduledStopUnix (110.95s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-966571 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-966571 --memory=2048 --driver=docker  --container-runtime=crio: (34.668527064s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-966571 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-966571 -n scheduled-stop-966571
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-966571 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-966571 --cancel-scheduled
E0731 12:20:24.296376  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-966571 -n scheduled-stop-966571
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-966571
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-966571 --schedule 15s
E0731 12:20:44.828639  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-966571
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-966571: exit status 7 (75.568685ms)

-- stdout --
	scheduled-stop-966571
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-966571 -n scheduled-stop-966571
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-966571 -n scheduled-stop-966571: exit status 7 (70.778937ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-966571" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-966571
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-966571: (4.611118171s)
--- PASS: TestScheduledStopUnix (110.95s)
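
The test above walks minikube's scheduled-stop flags end to end; condensed to the commands involved (profile name from this run, minikube standing in for the test binary):

	minikube stop -p scheduled-stop-966571 --schedule 5m                        # arm a stop five minutes out
	minikube status --format={{.TimeToStop}} -p scheduled-stop-966571          # inspect the pending timer
	minikube stop -p scheduled-stop-966571 --cancel-scheduled                  # disarm it
	minikube stop -p scheduled-stop-966571 --schedule 15s                      # re-arm; once it fires, status exits 7 with everything Stopped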

TestInsufficientStorage (13.84s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-714934 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-714934 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.255032047s)

-- stdout --
	{"specversion":"1.0","id":"d6019f83-1ec8-4fdf-8392-f8f82c9ebae1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-714934] minikube v1.31.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"90eb754b-1aa3-4493-ad82-325dbd02b0e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16968"}}
	{"specversion":"1.0","id":"b377c0db-a7d8-4cc6-9d8f-2f31ad23272c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"adbcc6a6-1f05-4370-ab27-beb48ac91fd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig"}}
	{"specversion":"1.0","id":"acce2bdc-46cc-4593-ad65-5e322857b602","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube"}}
	{"specversion":"1.0","id":"77588c50-d1b2-469d-9e6e-b114f585d1c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"0a393b8f-2908-4e2a-a2ce-f0c7ee500d62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b8cadea0-4663-4802-a1a5-0e29c88a2823","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a5d3c2d6-8593-4039-981c-f076bf9c613f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"eebcb549-af00-4075-acf1-8b93aa3d6591","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2763ab12-4b95-44b5-9448-d65af404ba79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"788d0872-c79d-463f-a8d0-bbf0bffad024","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-714934 in cluster insufficient-storage-714934","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"410864f3-a62e-450d-bd37-d6c2ebfdaa28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"be254955-0989-45b0-a4a3-7f93e4e91542","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2a31278d-7b7b-4c76-a0a6-74856adfb206","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-714934 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-714934 --output=json --layout=cluster: exit status 7 (331.870975ms)

-- stdout --
	{"Name":"insufficient-storage-714934","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-714934","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0731 12:21:35.114608  950861 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-714934" does not appear in /home/jenkins/minikube-integration/16968-847174/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-714934 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-714934 --output=json --layout=cluster: exit status 7 (327.668344ms)

-- stdout --
	{"Name":"insufficient-storage-714934","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-714934","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0731 12:21:35.447317  950918 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-714934" does not appear in /home/jenkins/minikube-integration/16968-847174/kubeconfig
	E0731 12:21:35.460338  950918 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/insufficient-storage-714934/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-714934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-714934
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-714934: (1.925174487s)
--- PASS: TestInsufficientStorage (13.84s)
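
Judging by the MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 entries in the JSON events above, the test simulates a nearly full /var rather than actually filling the disk. A sketch of reproducing the failure by hand, assuming those variables behave as they do in this run:

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  minikube start -p insufficient-storage-714934 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=crio
	# expected: exit status 26 (RSRC_DOCKER_STORAGE); per the error text, --force skips the check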

TestKubernetesUpgrade (394.03s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-047034 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-047034 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m16.137991532s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-047034
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-047034: (1.86835396s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-047034 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-047034 status --format={{.Host}}: exit status 7 (76.084606ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-047034 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-047034 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m40.751382217s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-047034 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-047034 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-047034 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (83.047219ms)

-- stdout --
	* [kubernetes-upgrade-047034] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-047034
	    minikube start -p kubernetes-upgrade-047034 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0470342 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.3, by running:
	    
	    minikube start -p kubernetes-upgrade-047034 --kubernetes-version=v1.27.3
	    

** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-047034 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-047034 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (32.736061173s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-047034" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-047034
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-047034: (2.236764234s)
--- PASS: TestKubernetesUpgrade (394.03s)
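
Condensed, the upgrade path this test validates (profile name and versions from this run; minikube standing in for the test binary):

	minikube start -p kubernetes-upgrade-047034 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
	minikube stop -p kubernetes-upgrade-047034
	minikube start -p kubernetes-upgrade-047034 --memory=2200 --kubernetes-version=v1.27.3 --driver=docker --container-runtime=crio
	minikube start -p kubernetes-upgrade-047034 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio   # refused: exit 106, K8S_DOWNGRADE_UNSUPPORTED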

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-522344 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-522344 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (94.836376ms)

-- stdout --
	* [NoKubernetes-522344] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (43.56s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-522344 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-522344 --driver=docker  --container-runtime=crio: (43.034608602s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-522344 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.56s)

TestNoKubernetes/serial/StartWithStopK8s (7.76s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-522344 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-522344 --no-kubernetes --driver=docker  --container-runtime=crio: (5.406160544s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-522344 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-522344 status -o json: exit status 2 (315.670477ms)

-- stdout --
	{"Name":"NoKubernetes-522344","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-522344
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-522344: (2.035638369s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.76s)

TestNoKubernetes/serial/Start (8.16s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-522344 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-522344 --no-kubernetes --driver=docker  --container-runtime=crio: (8.164252393s)
--- PASS: TestNoKubernetes/serial/Start (8.16s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-522344 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-522344 "sudo systemctl is-active --quiet service kubelet": exit status 1 (378.633893ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)
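
The exit status 1 from minikube ssh above wraps the remote command's own status 3, visible in the stderr block: systemctl is-active conventionally exits 0 for an active unit and 3 for an inactive one, which is exactly what this check relies on. Run by hand (profile name from this run):

	minikube ssh -p NoKubernetes-522344 "sudo systemctl is-active --quiet service kubelet"
	echo $?   # non-zero, since no kubelet runs in --no-kubernetes mode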

TestNoKubernetes/serial/ProfileList (1.09s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.09s)

TestNoKubernetes/serial/Stop (1.28s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-522344
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-522344: (1.275566874s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

TestNoKubernetes/serial/StartNoArgs (7.69s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-522344 --driver=docker  --container-runtime=crio
E0731 12:22:42.132661  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-522344 --driver=docker  --container-runtime=crio: (7.685930597s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.69s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-522344 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-522344 "sudo systemctl is-active --quiet service kubelet": exit status 1 (301.092573ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

TestStoppedBinaryUpgrade/Setup (1.09s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.09s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.65s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-379049
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.65s)

                                                
                                    
TestPause/serial/Start (84.04s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-267284 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0731 12:27:42.132139  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
E0731 12:28:27.345602  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-267284 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m24.042420208s)
--- PASS: TestPause/serial/Start (84.04s)

                                                
                                    
TestNetworkPlugins/group/false (5.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-240918 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-240918 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (409.1131ms)

                                                
                                                
-- stdout --
	* [false-240918] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:29:44.991136  989850 out.go:296] Setting OutFile to fd 1 ...
	I0731 12:29:44.991375  989850 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 12:29:44.992360  989850 out.go:309] Setting ErrFile to fd 2...
	I0731 12:29:44.992388  989850 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 12:29:44.992703  989850 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-847174/.minikube/bin
	I0731 12:29:44.993160  989850 out.go:303] Setting JSON to false
	I0731 12:29:44.994334  989850 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":72732,"bootTime":1690733853,"procs":291,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0731 12:29:44.994424  989850 start.go:138] virtualization:  
	I0731 12:29:44.997213  989850 out.go:177] * [false-240918] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0731 12:29:45.000095  989850 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 12:29:45.002521  989850 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:29:45.000258  989850 notify.go:220] Checking for updates...
	I0731 12:29:45.007560  989850 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16968-847174/kubeconfig
	I0731 12:29:45.014005  989850 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-847174/.minikube
	I0731 12:29:45.016140  989850 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0731 12:29:45.018193  989850 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:29:45.036545  989850 config.go:182] Loaded profile config "force-systemd-flag-198804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 12:29:45.036808  989850 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 12:29:45.144329  989850 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 12:29:45.144549  989850 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 12:29:45.316703  989850 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-07-31 12:29:45.286234677 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0731 12:29:45.316840  989850 docker.go:294] overlay module found
	I0731 12:29:45.318847  989850 out.go:177] * Using the docker driver based on user configuration
	I0731 12:29:45.320394  989850 start.go:298] selected driver: docker
	I0731 12:29:45.320416  989850 start.go:898] validating driver "docker" against <nil>
	I0731 12:29:45.320431  989850 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:29:45.322736  989850 out.go:177] 
	W0731 12:29:45.324476  989850 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0731 12:29:45.326443  989850 out.go:177] 

                                                
                                                
** /stderr **
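
The MK_USAGE exit above is the expected outcome: CRI-O ships no built-in pod networking, so minikube rejects --cni=false whenever --container-runtime=crio is selected. A start invocation that would pass the same validation names a CNI explicitly, for example (a sketch; "bridge" is one of minikube's built-in --cni values):

	# Satisfies the "crio requires CNI" check instead of tripping it.
	out/minikube-linux-arm64 start -p false-240918 --memory=2048 \
	  --cni=bridge --driver=docker --container-runtime=crio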
net_test.go:88: 
----------------------- debugLogs start: false-240918 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-240918

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-240918

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-240918

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-240918

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-240918

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-240918

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-240918

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-240918

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-240918

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-240918

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-240918

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-240918" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-240918" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-240918" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-240918" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-240918" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-240918" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-240918" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-240918" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-240918" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-240918" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-240918" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
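
This empty kubeconfig (null clusters, contexts and users) is the root of every failure above: the false-240918 profile was never started, so no context was ever written, and each kubectl probe fails the same way. For example:

	# With the kubeconfig shown above, any context-scoped call fails identically:
	kubectl --context false-240918 get nodes   # context "false-240918" does not exist
	kubectl config get-contexts                # prints only the header row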

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-240918

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240918"

                                                
                                                
----------------------- debugLogs end: false-240918 [took: 5.021843257s] --------------------------------
helpers_test.go:175: Cleaning up "false-240918" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-240918
--- PASS: TestNetworkPlugins/group/false (5.69s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (145.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-601613 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0731 12:32:42.132821  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-601613 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m25.498877942s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (145.50s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-601613 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d695f879-352f-4c95-bf59-c5825d9b6204] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d695f879-352f-4c95-bf59-c5825d9b6204] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.036202917s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-601613 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.69s)
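
The deploy step waits for a pod labelled integration-test=busybox and then reads the open-file limit inside it. The testdata/busybox.yaml fixture is not reproduced in this log; a minimal stand-in (an assumption, not the actual fixture) that exercises the same path would be:

	# Hypothetical equivalent of testdata/busybox.yaml; the label must match the
	# test's "integration-test=busybox" selector, and the image is the one the
	# image-verification step later reports in this run.
	cat <<'EOF' | kubectl --context old-k8s-version-601613 create -f -
	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox
	  labels:
	    integration-test: busybox
	spec:
	  containers:
	  - name: busybox
	    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    command: ["sleep", "3600"]
	EOF
	kubectl --context old-k8s-version-601613 wait --for=condition=ready pod/busybox --timeout=8m0s
	kubectl --context old-k8s-version-601613 exec busybox -- /bin/sh -c "ulimit -n"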

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-601613 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-601613 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.958969833s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-601613 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-601613 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-601613 --alsologtostderr -v=3: (12.208603181s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-601613 -n old-k8s-version-601613
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-601613 -n old-k8s-version-601613: exit status 7 (132.315624ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-601613 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.33s)
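
The --format flag applies a Go template to minikube's status struct, which is why {{.Host}} prints a bare "Stopped" and the stopped state surfaces as a non-zero exit that the test tolerates ("may be ok"). Several fields can be read at once, for example (a sketch; only the Host, Kubelet, APIServer and Kubeconfig fields seen elsewhere in this report are taken as given):

	out/minikube-linux-arm64 status -p old-k8s-version-601613 \
	  --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'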

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (428.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-601613 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-601613 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m7.797080152s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-601613 -n old-k8s-version-601613
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (428.41s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (72.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-026642 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0731 12:35:24.295719  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-026642 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (1m12.949097943s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (72.95s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-026642 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bc1dbfaf-4559-4fbd-93ab-01c63f809043] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bc1dbfaf-4559-4fbd-93ab-01c63f809043] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.030260722s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-026642 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.51s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-026642 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-026642 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.098873405s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-026642 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-026642 --alsologtostderr -v=3
E0731 12:35:44.829599  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
E0731 12:35:45.181091  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-026642 --alsologtostderr -v=3: (12.119017098s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-026642 -n no-preload-026642
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-026642 -n no-preload-026642: exit status 7 (80.241067ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-026642 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (618.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-026642 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0731 12:37:42.132095  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
E0731 12:38:47.876716  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
E0731 12:40:24.295773  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
E0731 12:40:44.829098  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-026642 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (10m17.676903007s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-026642 -n no-preload-026642
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (618.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-6nh2r" [f7259b14-dbbb-4b31-ba54-46a2d09603d6] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.037255436s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.04s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-6nh2r" [f7259b14-dbbb-4b31-ba54-46a2d09603d6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009907633s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-601613 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-601613 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.37s)
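
The verification step parses crictl's JSON image inventory and flags anything outside minikube's expected set. Assuming crictl's documented output shape (an "images" array with per-image "repoTags"), the same inventory could be listed by hand:

	# Hand-run equivalent of the scan; assumes jq on the host and crictl's
	# standard "images"/"repoTags" JSON keys.
	out/minikube-linux-arm64 ssh -p old-k8s-version-601613 "sudo crictl images -o json" \
	  | jq -r '.images[].repoTags[]'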

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-601613 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-601613 -n old-k8s-version-601613
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-601613 -n old-k8s-version-601613: exit status 2 (397.214643ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-601613 -n old-k8s-version-601613
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-601613 -n old-k8s-version-601613: exit status 2 (373.437236ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-601613 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-601613 --alsologtostderr -v=1: (1.016118873s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-601613 -n old-k8s-version-601613
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-601613 -n old-k8s-version-601613
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.92s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (54.8s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-863103 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-863103 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (54.798012331s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (54.80s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-863103 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8134bb4e-f373-4ebc-90c1-199cfd703b95] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8134bb4e-f373-4ebc-90c1-199cfd703b95] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.0341921s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-863103 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.57s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-863103 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-863103 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.219137049s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-863103 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-863103 --alsologtostderr -v=3
E0731 12:42:42.132826  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-863103 --alsologtostderr -v=3: (12.156767431s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-863103 -n embed-certs-863103
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-863103 -n embed-certs-863103: exit status 7 (77.582077ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-863103 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (363.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-863103 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0731 12:43:44.353426  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/old-k8s-version-601613/client.crt: no such file or directory
E0731 12:43:44.358782  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/old-k8s-version-601613/client.crt: no such file or directory
E0731 12:43:44.369057  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/old-k8s-version-601613/client.crt: no such file or directory
E0731 12:43:44.389424  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/old-k8s-version-601613/client.crt: no such file or directory
E0731 12:43:44.430073  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/old-k8s-version-601613/client.crt: no such file or directory
E0731 12:43:44.510506  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/old-k8s-version-601613/client.crt: no such file or directory
E0731 12:43:44.670863  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/old-k8s-version-601613/client.crt: no such file or directory
E0731 12:43:44.992004  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/old-k8s-version-601613/client.crt: no such file or directory
E0731 12:43:45.632979  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/old-k8s-version-601613/client.crt: no such file or directory
E0731 12:43:46.913192  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/old-k8s-version-601613/client.crt: no such file or directory
E0731 12:43:49.473400  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/old-k8s-version-601613/client.crt: no such file or directory
E0731 12:43:54.593592  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/old-k8s-version-601613/client.crt: no such file or directory
E0731 12:44:04.834525  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/old-k8s-version-601613/client.crt: no such file or directory
E0731 12:44:25.314729  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/old-k8s-version-601613/client.crt: no such file or directory
E0731 12:45:06.275634  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/old-k8s-version-601613/client.crt: no such file or directory
E0731 12:45:07.346691  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
E0731 12:45:24.296039  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
E0731 12:45:44.829215  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-863103 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (6m2.755654805s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-863103 -n embed-certs-863103
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (363.38s)
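
The burst of cert_rotation errors during this start is background noise from the shared test process: client-go's certificate-rotation watcher keeps polling client.crt paths of profiles that were stopped, recreated or deleted earlier in the run, and whenever the file is absent at poll time it logs "no such file or directory". The errors do not indicate a failure in the test being run; the referenced path simply does not exist at that moment, which a direct check would confirm (a sketch):

	# The watcher is polling a path that is not present:
	ls /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/old-k8s-version-601613/client.crt
	# ls: cannot access '...': No such file or directory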

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-mdgkn" [4e1ab9de-f8fa-4027-99cb-bb5715e59c00] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.026602127s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-mdgkn" [4e1ab9de-f8fa-4027-99cb-bb5715e59c00] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011360386s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-026642 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-026642 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)
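
VerifyKubernetesImages pulls the CRI-O image list over SSH and flags anything outside minikube's expected set; kindnetd and the busybox test image are reported but tolerated. A minimal sketch of the same inspection (the jq filter assumes crictl's usual JSON layout of an images array with repoTags):

	out/minikube-linux-arm64 ssh -p no-preload-026642 "sudo crictl images -o json" | \
	  jq -r '.images[].repoTags[]'   # one repo:tag per line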

TestStartStop/group/no-preload/serial/Pause (3.42s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-026642 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-026642 -n no-preload-026642
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-026642 -n no-preload-026642: exit status 2 (354.128115ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-026642 -n no-preload-026642
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-026642 -n no-preload-026642: exit status 2 (336.061444ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-026642 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-026642 -n no-preload-026642
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-026642 -n no-preload-026642
E0731 12:46:28.196448  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/old-k8s-version-601613/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.42s)
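
The Pause subtest drives a full pause/unpause cycle and leans on exit codes: after pause, both status probes exit 2 (APIServer reports Paused, Kubelet reports Stopped), which the test accepts as the paused state; unpause must bring both probes back to clean exits. The cycle, as run above:

	out/minikube-linux-arm64 pause -p no-preload-026642 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-026642 -n no-preload-026642   # "Paused", exit 2
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-026642 -n no-preload-026642     # "Stopped", exit 2
	out/minikube-linux-arm64 unpause -p no-preload-026642 --alsologtostderr -v=1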

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.65s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-999498 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0731 12:47:42.132866  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-999498 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (1m19.648941788s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.65s)
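
The default-k8s-diff-port group reruns the start/stop lifecycle with the API server on a non-default port, the point being that --apiserver-port=8444 is honored on first start and survives the later stop and SecondStart. The invocation from this run:

	out/minikube-linux-arm64 start -p default-k8s-diff-port-999498 --memory=2200 --alsologtostderr \
	  --wait=true --apiserver-port=8444 --driver=docker --container-runtime=crio --kubernetes-version=v1.27.3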

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.57s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-999498 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [264d704a-4673-43a6-9b2b-c7b07a6eecb3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [264d704a-4673-43a6-9b2b-c7b07a6eecb3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.031600907s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-999498 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.57s)
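
DeployApp creates a busybox pod from testdata, waits for it to reach Running, then execs a trivial command through the API server, exercising the scheduler, runtime and exec path end to end (the `ulimit -n` output reflects the container's file-descriptor limit). The two kubectl steps, as a sketch:

	kubectl --context default-k8s-diff-port-999498 create -f testdata/busybox.yaml
	kubectl --context default-k8s-diff-port-999498 exec busybox -- /bin/sh -c "ulimit -n"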

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.62s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-999498 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-999498 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.459014097s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-999498 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.62s)
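
EnableAddonWhileActive enables metrics-server on the running cluster while redirecting both the image and its registry, then inspects the Deployment (the describe output is where the echoserver image and fake.domain registry overrides should show up). The same steps:

	out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-999498 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	kubectl --context default-k8s-diff-port-999498 describe deploy/metrics-server -n kube-system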

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.32s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-999498 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-999498 --alsologtostderr -v=3: (12.322420947s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.32s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-999498 -n default-k8s-diff-port-999498
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-999498 -n default-k8s-diff-port-999498: exit status 7 (105.259364ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-999498 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)
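
EnableAddonAfterStop again leans on exit codes: `status --format={{.Host}}` returns exit status 7 for a stopped host, which the test accepts, and addon enablement must still succeed against the stopped profile so the dashboard is present when SecondStart brings the cluster back. Sketch:

	out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-999498 -n default-k8s-diff-port-999498   # "Stopped", exit 7
	out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-999498 --images=MetricsScraper=registry.k8s.io/echoserver:1.4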

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (346.65s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-999498 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0731 12:48:44.352848  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/old-k8s-version-601613/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-999498 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (5m45.723844746s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-999498 -n default-k8s-diff-port-999498
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (346.65s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.05s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-kdwr2" [55b3a769-a7e2-4c8d-85d0-32d45f0eeea1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.052788488s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.05s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-kdwr2" [55b3a769-a7e2-4c8d-85d0-32d45f0eeea1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015095601s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-863103 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-863103 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/embed-certs/serial/Pause (3.43s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-863103 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-863103 -n embed-certs-863103
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-863103 -n embed-certs-863103: exit status 2 (340.089366ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-863103 -n embed-certs-863103
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-863103 -n embed-certs-863103: exit status 2 (359.42064ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-863103 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-863103 -n embed-certs-863103
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-863103 -n embed-certs-863103
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.43s)

TestStartStop/group/newest-cni/serial/FirstStart (40.91s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-438542 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-438542 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (40.914148772s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.91s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-438542 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-438542 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.117174317s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/newest-cni/serial/Stop (1.34s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-438542 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-438542 --alsologtostderr -v=3: (1.340266843s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.34s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-438542 -n newest-cni-438542
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-438542 -n newest-cni-438542: exit status 7 (74.500481ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-438542 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (31.36s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-438542 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0731 12:50:24.295456  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-438542 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (30.884711171s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-438542 -n newest-cni-438542
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.36s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
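
The newest-cni group starts with --network-plugin=cni and a kubeadm pod-network-cidr but installs no CNI add-on, so pods cannot schedule; that is why DeployApp, UserAppExistsAfterStop and AddonExistsAfterStop above are 0.00s no-ops behind the "cni mode requires additional setup" warning, and why --wait is narrowed to apiserver,system_pods,default_sa. The start flags from this run:

	out/minikube-linux-arm64 start -p newest-cni-438542 --memory=2200 --alsologtostderr \
	  --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
	  --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=crio --kubernetes-version=v1.27.3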

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-438542 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)

TestStartStop/group/newest-cni/serial/Pause (3.27s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-438542 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-438542 -n newest-cni-438542
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-438542 -n newest-cni-438542: exit status 2 (368.990289ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-438542 -n newest-cni-438542
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-438542 -n newest-cni-438542: exit status 2 (378.565749ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-438542 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-438542 -n newest-cni-438542
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-438542 -n newest-cni-438542
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.27s)

TestNetworkPlugins/group/auto/Start (77.23s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-240918 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0731 12:50:35.253323  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/no-preload-026642/client.crt: no such file or directory
E0731 12:50:37.814872  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/no-preload-026642/client.crt: no such file or directory
E0731 12:50:42.935527  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/no-preload-026642/client.crt: no such file or directory
E0731 12:50:44.829412  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
E0731 12:50:53.175748  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/no-preload-026642/client.crt: no such file or directory
E0731 12:51:13.656513  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/no-preload-026642/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-240918 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m17.22560399s)
--- PASS: TestNetworkPlugins/group/auto/Start (77.23s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-240918 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/auto/NetCatPod (10.39s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-240918 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-pmftc" [4f0ae9b1-5798-4776-b07f-8ea94b1bc062] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0731 12:51:54.617312  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/no-preload-026642/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-pmftc" [4f0ae9b1-5798-4776-b07f-8ea94b1bc062] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.013440001s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.39s)
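
Each NetCatPod subtest (re)creates the same netcat deployment with `replace --force`, which deletes any existing object before recreating it and so stays idempotent as the suite cycles through network plugins; the test then waits up to 15m for an app=netcat pod to reach Running. Against the auto profile:

	kubectl --context auto-240918 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-240918 get pods -l app=netcat   # repeat until STATUS shows Running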

TestNetworkPlugins/group/auto/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-240918 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

TestNetworkPlugins/group/auto/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-240918 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

TestNetworkPlugins/group/auto/HairPin (0.22s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-240918 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)
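
DNS, Localhost and HairPin form a connectivity trio run from inside the netcat pod: the nslookup proves cluster DNS works under the plugin, the localhost probe proves in-pod loopback, and the hairpin probe connects back to the pod through its own `netcat` service name, the case that breaks when hairpin NAT is misconfigured. As run above:

	kubectl --context auto-240918 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-240918 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-240918 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"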

TestNetworkPlugins/group/kindnet/Start (79.89s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-240918 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0731 12:52:25.181808  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
E0731 12:52:42.132064  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
E0731 12:53:16.538124  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/no-preload-026642/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-240918 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m19.887644922s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (79.89s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-nxzd7" [9bdee711-1f95-479f-9328-9e535b1d641b] Running
E0731 12:53:44.352806  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/old-k8s-version-601613/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.050516757s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-240918 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.39s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-240918 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-ks9nw" [2beb02fe-568b-4217-b8dc-104f7c5ed2c9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-ks9nw" [2beb02fe-568b-4217-b8dc-104f7c5ed2c9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.014242207s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.39s)

TestNetworkPlugins/group/kindnet/DNS (0.42s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-240918 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.42s)

TestNetworkPlugins/group/kindnet/Localhost (0.32s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-240918 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.32s)

TestNetworkPlugins/group/kindnet/HairPin (0.32s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-240918 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.05s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-mq6bw" [15d14212-108c-4a33-a1d2-7e59dc330ee2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-mq6bw" [15d14212-108c-4a33-a1d2-7e59dc330ee2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.047560242s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.05s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-mq6bw" [15d14212-108c-4a33-a1d2-7e59dc330ee2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0126621s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-999498 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.46s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-999498 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.46s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (5.04s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-999498 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-999498 --alsologtostderr -v=1: (1.225318908s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-999498 -n default-k8s-diff-port-999498
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-999498 -n default-k8s-diff-port-999498: exit status 2 (501.663217ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-999498 -n default-k8s-diff-port-999498
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-999498 -n default-k8s-diff-port-999498: exit status 2 (450.184334ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-999498 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-999498 --alsologtostderr -v=1: (1.205720368s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-999498 -n default-k8s-diff-port-999498
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-999498 -n default-k8s-diff-port-999498
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.04s)
E0731 12:58:44.088550  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/kindnet-240918/client.crt: no such file or directory
E0731 12:58:44.093804  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/kindnet-240918/client.crt: no such file or directory
E0731 12:58:44.104059  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/kindnet-240918/client.crt: no such file or directory
E0731 12:58:44.124398  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/kindnet-240918/client.crt: no such file or directory
E0731 12:58:44.164645  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/kindnet-240918/client.crt: no such file or directory
E0731 12:58:44.244900  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/kindnet-240918/client.crt: no such file or directory
E0731 12:58:44.353068  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/old-k8s-version-601613/client.crt: no such file or directory
E0731 12:58:44.405322  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/kindnet-240918/client.crt: no such file or directory
E0731 12:58:44.725883  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/kindnet-240918/client.crt: no such file or directory
E0731 12:58:45.366074  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/kindnet-240918/client.crt: no such file or directory
E0731 12:58:46.646280  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/kindnet-240918/client.crt: no such file or directory
E0731 12:58:49.206445  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/kindnet-240918/client.crt: no such file or directory
E0731 12:58:54.326605  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/kindnet-240918/client.crt: no such file or directory
E0731 12:59:04.567084  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/kindnet-240918/client.crt: no such file or directory
E0731 12:59:13.623858  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/default-k8s-diff-port-999498/client.crt: no such file or directory

TestNetworkPlugins/group/calico/Start (82.36s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-240918 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-240918 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m22.363061486s)
--- PASS: TestNetworkPlugins/group/calico/Start (82.36s)

TestNetworkPlugins/group/custom-flannel/Start (72.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-240918 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0731 12:55:24.296027  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/addons-708039/client.crt: no such file or directory
E0731 12:55:27.876865  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
E0731 12:55:32.689543  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/no-preload-026642/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-240918 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m12.111014915s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (72.11s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-240918 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.42s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-240918 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-mrdph" [263c14dc-0a4c-4be3-88ae-246e429f05ac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0731 12:55:44.828537  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/ingress-addon-legacy-604717/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-mrdph" [263c14dc-0a4c-4be3-88ae-246e429f05ac] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.01659476s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.42s)

TestNetworkPlugins/group/calico/ControllerPod (5.04s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-blxbg" [bf8cb9dc-439f-48fb-adb1-98e1aae0410b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.037414346s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

TestNetworkPlugins/group/custom-flannel/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-240918 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-240918 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-240918 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/calico/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-240918 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

TestNetworkPlugins/group/calico/NetCatPod (11.47s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-240918 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-4wspx" [f3834a2e-6997-4340-b80c-d27bd6fe8a49] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-4wspx" [f3834a2e-6997-4340-b80c-d27bd6fe8a49] Running
E0731 12:56:00.378919  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/no-preload-026642/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.01604098s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.47s)

TestNetworkPlugins/group/calico/DNS (0.32s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-240918 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.32s)

TestNetworkPlugins/group/calico/Localhost (0.25s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-240918 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.25s)

TestNetworkPlugins/group/calico/HairPin (0.27s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-240918 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.27s)

TestNetworkPlugins/group/enable-default-cni/Start (92.57s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-240918 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-240918 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m32.566331701s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (92.57s)

TestNetworkPlugins/group/flannel/Start (74.06s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-240918 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0731 12:56:52.030889  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/auto-240918/client.crt: no such file or directory
E0731 12:56:52.036232  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/auto-240918/client.crt: no such file or directory
E0731 12:56:52.046649  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/auto-240918/client.crt: no such file or directory
E0731 12:56:52.067803  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/auto-240918/client.crt: no such file or directory
E0731 12:56:52.108406  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/auto-240918/client.crt: no such file or directory
E0731 12:56:52.189380  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/auto-240918/client.crt: no such file or directory
E0731 12:56:52.349724  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/auto-240918/client.crt: no such file or directory
E0731 12:56:52.670738  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/auto-240918/client.crt: no such file or directory
E0731 12:56:53.311689  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/auto-240918/client.crt: no such file or directory
E0731 12:56:54.592693  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/auto-240918/client.crt: no such file or directory
E0731 12:56:57.153567  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/auto-240918/client.crt: no such file or directory
E0731 12:57:02.274296  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/auto-240918/client.crt: no such file or directory
E0731 12:57:12.515059  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/auto-240918/client.crt: no such file or directory
E0731 12:57:32.995517  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/auto-240918/client.crt: no such file or directory
E0731 12:57:42.133126  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/functional-063414/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-240918 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m14.055173539s)
--- PASS: TestNetworkPlugins/group/flannel/Start (74.06s)

TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-q729z" [a5dcaeb6-ff3e-41cc-a6e4-78be87bffed5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.033854924s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.04s)
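The ControllerPod step only verifies that the CNI's own controller pods come up: here it polls for pods labeled app=flannel in the kube-flannel namespace until they are Running. A rough equivalent driven through kubectl wait (a sketch, not the harness's actual polling loop; the context, namespace, and label are taken from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // waitForFlannel blocks until pods labeled app=flannel in the
    // kube-flannel namespace report Ready, approximating the check above.
    func waitForFlannel(kubeContext string) error {
    	out, err := exec.Command("kubectl", "--context", kubeContext,
    		"-n", "kube-flannel", "wait", "--for=condition=Ready",
    		"pod", "-l", "app=flannel", "--timeout=10m").CombinedOutput()
    	fmt.Print(string(out))
    	return err
    }

    func main() {
    	if err := waitForFlannel("flannel-240918"); err != nil {
    		fmt.Println("flannel pods not ready:", err)
    	}
    }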

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-240918 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)
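KubeletFlags asserts on kubelet's full command line, which it fetches from inside the node with `pgrep -a kubelet` over minikube ssh, exactly as logged. A sketch of that check; the wrapper is hypothetical, and the CRI-O socket check at the end is an assumption about what the real assertion looks like:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // kubeletFlags returns kubelet's full command line from inside the
    // node, using the same minikube ssh invocation shown in the log.
    func kubeletFlags(profile string) (string, error) {
    	out, err := exec.Command("out/minikube-linux-arm64",
    		"ssh", "-p", profile, "pgrep -a kubelet").CombinedOutput()
    	return string(out), err
    }

    func main() {
    	flags, err := kubeletFlags("enable-default-cni-240918")
    	if err != nil {
    		fmt.Println("ssh failed:", err)
    		return
    	}
    	// Example assertion (an assumption, for illustration): on a
    	// crio-runtime cluster the kubelet should reference the CRI-O socket.
    	fmt.Println("crio socket referenced:",
    		strings.Contains(flags, "crio.sock"))
    }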

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-240918 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-8q88x" [5c8a05d3-ee54-4b0c-a7d4-ed5551716d07] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0731 12:57:51.700365  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/default-k8s-diff-port-999498/client.crt: no such file or directory
E0731 12:57:51.705637  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/default-k8s-diff-port-999498/client.crt: no such file or directory
E0731 12:57:51.715896  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/default-k8s-diff-port-999498/client.crt: no such file or directory
E0731 12:57:51.736097  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/default-k8s-diff-port-999498/client.crt: no such file or directory
E0731 12:57:51.776342  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/default-k8s-diff-port-999498/client.crt: no such file or directory
E0731 12:57:51.856626  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/default-k8s-diff-port-999498/client.crt: no such file or directory
E0731 12:57:52.017629  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/default-k8s-diff-port-999498/client.crt: no such file or directory
E0731 12:57:52.338768  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/default-k8s-diff-port-999498/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-8q88x" [5c8a05d3-ee54-4b0c-a7d4-ed5551716d07] Running
E0731 12:57:56.821685  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/default-k8s-diff-port-999498/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.028681329s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.37s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-240918 "pgrep -a kubelet"
E0731 12:57:52.979834  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/default-k8s-diff-port-999498/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)

TestNetworkPlugins/group/flannel/NetCatPod (12.57s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-240918 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-p9nfg" [18df0fd7-42d6-4e67-85bc-d03eaae1d67c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0731 12:57:54.260986  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/default-k8s-diff-port-999498/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-p9nfg" [18df0fd7-42d6-4e67-85bc-d03eaae1d67c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.010199153s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.57s)
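Each NetCatPod step recreates a small netcat/dnsutils deployment with `kubectl replace --force` (so a leftover object from an earlier run cannot mask a failure) and then waits for its pods to become Ready. A sketch of those two steps; the helper is hypothetical, while the manifest path and context name come from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes a command and returns its combined stdout/stderr.
    func run(name string, args ...string) (string, error) {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	ctx := "flannel-240918" // context name taken from the log
    	// Recreate the netcat deployment, as the test does with --force.
    	if out, err := run("kubectl", "--context", ctx, "replace", "--force",
    		"-f", "testdata/netcat-deployment.yaml"); err != nil {
    		fmt.Println(out, err)
    		return
    	}
    	// Block until the pods behind app=netcat report Ready.
    	out, err := run("kubectl", "--context", ctx, "wait",
    		"--for=condition=Ready", "pod", "-l", "app=netcat", "--timeout=15m")
    	fmt.Print(out)
    	if err != nil {
    		fmt.Println("netcat pod never became ready:", err)
    	}
    }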

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-240918 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.37s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-240918 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-240918 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

TestNetworkPlugins/group/flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-240918 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

TestNetworkPlugins/group/flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-240918 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.24s)

TestNetworkPlugins/group/flannel/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-240918 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.25s)
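DNS, Localhost, and HairPin form a fixed trio of in-pod probes: DNS resolves kubernetes.default through the cluster DNS service, Localhost checks that the pod can reach its own port 8080 over loopback, and HairPin checks that the pod can reach itself back through its own Service name, which exercises hairpin NAT. A table-driven sketch using the exact commands from the log (`nc -z` only scans the port, `-w 5` bounds the wait, `-i 5` sets the interval); the real assertions live in net_test.go:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Each probe runs inside the netcat deployment, mirroring the three
    // checks above. Sketch only, with the context name taken from the log.
    var probes = []struct {
    	name string
    	args []string
    }{
    	// Cluster DNS must resolve the apiserver's service name.
    	{"DNS", []string{"nslookup", "kubernetes.default"}},
    	// The pod must reach its own listening port via loopback.
    	{"Localhost", []string{"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"}},
    	// Hairpin: the pod must reach itself through its own Service.
    	{"HairPin", []string{"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"}},
    }

    func main() {
    	ctx := "flannel-240918"
    	for _, p := range probes {
    		args := append([]string{"--context", ctx, "exec",
    			"deployment/netcat", "--"}, p.args...)
    		out, err := exec.Command("kubectl", args...).CombinedOutput()
    		fmt.Printf("%s: err=%v\n%s", p.name, err, out)
    	}
    }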

                                                
                                    
TestNetworkPlugins/group/bridge/Start (49.28s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-240918 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-240918 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (49.275809606s)
--- PASS: TestNetworkPlugins/group/bridge/Start (49.28s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-240918 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

TestNetworkPlugins/group/bridge/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-240918 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-djchv" [96b5d279-9b3a-4363-86ee-ec5555cc5cfd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-djchv" [96b5d279-9b3a-4363-86ee-ec5555cc5cfd] Running
E0731 12:59:25.048220  852550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-847174/.minikube/profiles/kindnet-240918/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.011112828s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.34s)

TestNetworkPlugins/group/bridge/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-240918 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

TestNetworkPlugins/group/bridge/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-240918 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

TestNetworkPlugins/group/bridge/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-240918 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

Test skip (29/298)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.27.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

TestDownloadOnly/v1.27.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

TestDownloadOnly/v1.27.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.3/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.3/kubectl (0.00s)

TestDownloadOnlyKic (0.56s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-307334 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-307334" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-307334
--- SKIP: TestDownloadOnlyKic (0.56s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-181969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-181969
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (4.36s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-240918 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-240918

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-240918

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-240918

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-240918

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-240918

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-240918

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-240918

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-240918

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-240918

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-240918

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: /etc/hosts:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: /etc/resolv.conf:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-240918

>>> host: crictl pods:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: crictl containers:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> k8s: describe netcat deployment:
error: context "kubenet-240918" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-240918" does not exist

>>> k8s: netcat logs:
error: context "kubenet-240918" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-240918" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-240918" does not exist

>>> k8s: coredns logs:
error: context "kubenet-240918" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-240918" does not exist

>>> k8s: api server logs:
error: context "kubenet-240918" does not exist

>>> host: /etc/cni:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: ip a s:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: ip r s:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: iptables-save:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: iptables table nat:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-240918" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-240918" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-240918" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: kubelet daemon config:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> k8s: kubelet logs:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-240918

>>> host: docker daemon status:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: docker daemon config:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: docker system info:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: cri-docker daemon status:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: cri-docker daemon config:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: cri-dockerd version:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: containerd daemon status:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: containerd daemon config:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: containerd config dump:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: crio daemon status:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: crio daemon config:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: /etc/crio:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

>>> host: crio config:
* Profile "kubenet-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240918"

----------------------- debugLogs end: kubenet-240918 [took: 4.173410096s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-240918" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-240918
--- SKIP: TestNetworkPlugins/group/kubenet (4.36s)
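Even for a skipped case, the harness dumps a fixed battery of diagnostics, which is what the debugLogs block above shows; every probe fails here only because the kubenet-240918 profile was never created. The collection is best-effort: each command runs regardless of earlier failures. A sketch of that collect-and-continue pattern (command list abbreviated and reconstructed from the dump):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	profile := "kubenet-240918"
    	// A few representative probes from the dump; the real list is longer.
    	cmds := [][]string{
    		{"kubectl", "--context", profile, "exec", "deployment/netcat",
    			"--", "nslookup", "kubernetes.default"},
    		{"out/minikube-linux-arm64", "ssh", "-p", profile,
    			"cat /etc/resolv.conf"},
    		{"kubectl", "--context", profile, "get",
    			"nodes,services,endpoints,daemonsets,deployments,pods", "-A"},
    	}
    	for _, c := range cmds {
    		// Print whatever came back and keep going; errors are expected
    		// when the profile does not exist.
    		out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
    		fmt.Printf(">>> %v\n%s", c, out)
    		if err != nil {
    			fmt.Println("(continuing despite error:", err, ")")
    		}
    	}
    }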

                                                
                                    
TestNetworkPlugins/group/cilium (5.1s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-240918 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-240918

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-240918

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-240918

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-240918

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-240918

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-240918

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-240918

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-240918

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-240918

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-240918

>>> host: /etc/nsswitch.conf:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

>>> host: /etc/hosts:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

>>> host: /etc/resolv.conf:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-240918

>>> host: crictl pods:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

>>> host: crictl containers:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

>>> k8s: describe netcat deployment:
error: context "cilium-240918" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-240918" does not exist

>>> k8s: netcat logs:
error: context "cilium-240918" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-240918" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-240918" does not exist

>>> k8s: coredns logs:
error: context "cilium-240918" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-240918" does not exist

>>> k8s: api server logs:
error: context "cilium-240918" does not exist

>>> host: /etc/cni:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

>>> host: ip a s:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

>>> host: ip r s:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

>>> host: iptables-save:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

>>> host: iptables table nat:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-240918

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-240918

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-240918" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-240918" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-240918

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-240918

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-240918" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-240918" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-240918" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-240918" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-240918" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-240918

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-240918" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240918"

                                                
                                                
----------------------- debugLogs end: cilium-240918 [took: 4.84969852s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-240918" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-240918
--- SKIP: TestNetworkPlugins/group/cilium (5.10s)